00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3669 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3271 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.044 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.045 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.047 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.063 Fetching changes from the remote Git repository 00:00:00.065 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.087 Using shallow fetch with depth 1 00:00:00.087 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.087 > git --version # timeout=10 00:00:00.121 > git --version # 'git version 2.39.2' 00:00:00.121 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.145 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.145 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.378 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.391 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.402 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:06.402 > git config core.sparsecheckout # timeout=10 00:00:06.412 > git read-tree -mu HEAD # timeout=10 00:00:06.428 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:06.450 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:06.450 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:06.578 [Pipeline] Start of Pipeline 00:00:06.592 [Pipeline] library 00:00:06.593 Loading library shm_lib@master 00:00:06.594 Library shm_lib@master is cached. Copying from home. 00:00:06.613 [Pipeline] node 00:00:06.622 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.625 [Pipeline] { 00:00:06.634 [Pipeline] catchError 00:00:06.635 [Pipeline] { 00:00:06.651 [Pipeline] wrap 00:00:06.664 [Pipeline] { 00:00:06.671 [Pipeline] stage 00:00:06.673 [Pipeline] { (Prologue) 00:00:06.845 [Pipeline] sh 00:00:07.125 + logger -p user.info -t JENKINS-CI 00:00:07.146 [Pipeline] echo 00:00:07.148 Node: WFP8 00:00:07.155 [Pipeline] sh 00:00:07.456 [Pipeline] setCustomBuildProperty 00:00:07.470 [Pipeline] echo 00:00:07.472 Cleanup processes 00:00:07.479 [Pipeline] sh 00:00:07.764 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.764 1303175 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.778 [Pipeline] sh 00:00:08.065 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.065 ++ grep -v 'sudo pgrep' 00:00:08.065 ++ awk '{print $1}' 00:00:08.065 + sudo kill -9 00:00:08.065 + true 00:00:08.081 [Pipeline] cleanWs 00:00:08.092 [WS-CLEANUP] Deleting project workspace... 00:00:08.092 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.099 [WS-CLEANUP] done 00:00:08.104 [Pipeline] setCustomBuildProperty 00:00:08.122 [Pipeline] sh 00:00:08.406 + sudo git config --global --replace-all safe.directory '*' 00:00:08.503 [Pipeline] httpRequest 00:00:08.533 [Pipeline] echo 00:00:08.535 Sorcerer 10.211.164.101 is alive 00:00:08.546 [Pipeline] httpRequest 00:00:08.552 HttpMethod: GET 00:00:08.552 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.553 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.569 Response Code: HTTP/1.1 200 OK 00:00:08.570 Success: Status code 200 is in the accepted range: 200,404 00:00:08.570 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:15.680 [Pipeline] sh 00:00:15.965 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:15.982 [Pipeline] httpRequest 00:00:16.004 [Pipeline] echo 00:00:16.006 Sorcerer 10.211.164.101 is alive 00:00:16.015 [Pipeline] httpRequest 00:00:16.020 HttpMethod: GET 00:00:16.021 URL: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:16.022 Sending request to url: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:16.023 Response Code: HTTP/1.1 200 OK 00:00:16.024 Success: Status code 200 is in the accepted range: 200,404 00:00:16.024 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:33.446 [Pipeline] sh 00:00:33.728 + tar --no-same-owner -xf spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz 00:00:36.276 [Pipeline] sh 00:00:36.560 + git -C spdk log --oneline -n5 00:00:36.560 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:00:36.560 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:00:36.560 2d30d9f83 accel: introduce tasks in sequence limit 00:00:36.560 2728651ee accel: adjust task per ch define name 00:00:36.560 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:36.580 [Pipeline] withCredentials 00:00:36.591 > git --version # timeout=10 00:00:36.605 > git --version # 'git version 2.39.2' 00:00:36.622 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:36.625 [Pipeline] { 00:00:36.634 [Pipeline] retry 00:00:36.636 [Pipeline] { 00:00:36.654 [Pipeline] sh 00:00:36.938 + git ls-remote http://dpdk.org/git/dpdk main 00:00:39.500 [Pipeline] } 00:00:39.523 [Pipeline] // retry 00:00:39.528 [Pipeline] } 00:00:39.550 [Pipeline] // withCredentials 00:00:39.561 [Pipeline] httpRequest 00:00:39.582 [Pipeline] echo 00:00:39.584 Sorcerer 10.211.164.101 is alive 00:00:39.593 [Pipeline] httpRequest 00:00:39.598 HttpMethod: GET 00:00:39.599 URL: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:39.599 Sending request to url: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:39.611 Response Code: HTTP/1.1 200 OK 00:00:39.611 Success: Status code 200 is in the accepted range: 200,404 00:00:39.612 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:52.992 [Pipeline] sh 00:00:53.275 + tar --no-same-owner -xf dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:00:54.663 [Pipeline] sh 00:00:54.946 + git -C dpdk log --oneline -n5 00:00:54.946 fa8d2f7f28 version: 24.07-rc2 00:00:54.946 d4bc3c2e01 maintainers: update for cxgbe driver 00:00:54.946 2227c0ed9a maintainers: update for Microsoft drivers 00:00:54.946 8385370337 maintainers: update for Arm 00:00:54.946 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:00:54.957 [Pipeline] } 00:00:54.974 [Pipeline] // stage 00:00:54.983 [Pipeline] stage 00:00:54.985 [Pipeline] { (Prepare) 00:00:55.009 [Pipeline] writeFile 00:00:55.026 [Pipeline] sh 00:00:55.308 + logger -p user.info -t JENKINS-CI 00:00:55.323 [Pipeline] sh 00:00:55.607 + logger -p user.info -t JENKINS-CI 00:00:55.620 [Pipeline] sh 00:00:55.902 + cat autorun-spdk.conf 00:00:55.903 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.903 SPDK_TEST_NVMF=1 00:00:55.903 SPDK_TEST_NVME_CLI=1 00:00:55.903 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.903 SPDK_TEST_NVMF_NICS=e810 00:00:55.903 SPDK_TEST_VFIOUSER=1 00:00:55.903 SPDK_RUN_UBSAN=1 00:00:55.903 NET_TYPE=phy 00:00:55.903 SPDK_TEST_NATIVE_DPDK=main 00:00:55.903 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.910 RUN_NIGHTLY=1 00:00:55.916 [Pipeline] readFile 00:00:55.945 [Pipeline] withEnv 00:00:55.947 [Pipeline] { 00:00:55.964 [Pipeline] sh 00:00:56.249 + set -ex 00:00:56.249 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:56.249 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.249 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.249 ++ SPDK_TEST_NVMF=1 00:00:56.249 ++ SPDK_TEST_NVME_CLI=1 00:00:56.249 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.249 ++ SPDK_TEST_NVMF_NICS=e810 00:00:56.249 ++ SPDK_TEST_VFIOUSER=1 00:00:56.249 ++ SPDK_RUN_UBSAN=1 00:00:56.249 ++ NET_TYPE=phy 00:00:56.249 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:56.249 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:56.249 ++ RUN_NIGHTLY=1 00:00:56.249 + case $SPDK_TEST_NVMF_NICS in 
00:00:56.249 + DRIVERS=ice 00:00:56.249 + [[ tcp == \r\d\m\a ]] 00:00:56.249 + [[ -n ice ]] 00:00:56.249 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:56.249 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:56.249 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:56.249 rmmod: ERROR: Module irdma is not currently loaded 00:00:56.249 rmmod: ERROR: Module i40iw is not currently loaded 00:00:56.249 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:56.249 + true 00:00:56.249 + for D in $DRIVERS 00:00:56.249 + sudo modprobe ice 00:00:56.249 + exit 0 00:00:56.258 [Pipeline] } 00:00:56.272 [Pipeline] // withEnv 00:00:56.276 [Pipeline] } 00:00:56.293 [Pipeline] // stage 00:00:56.304 [Pipeline] catchError 00:00:56.307 [Pipeline] { 00:00:56.324 [Pipeline] timeout 00:00:56.324 Timeout set to expire in 50 min 00:00:56.350 [Pipeline] { 00:00:56.367 [Pipeline] stage 00:00:56.370 [Pipeline] { (Tests) 00:00:56.383 [Pipeline] sh 00:00:56.664 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.664 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.664 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.664 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:56.664 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:56.664 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:56.664 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:56.664 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:56.664 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:56.664 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:56.664 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:56.664 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:56.664 + source /etc/os-release 00:00:56.664 ++ NAME='Fedora Linux' 00:00:56.664 ++ VERSION='38 (Cloud Edition)' 00:00:56.664 ++ ID=fedora 00:00:56.664 ++ VERSION_ID=38 00:00:56.664 ++ VERSION_CODENAME= 00:00:56.664 ++ PLATFORM_ID=platform:f38 00:00:56.664 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:56.664 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:56.664 ++ LOGO=fedora-logo-icon 00:00:56.664 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:56.664 ++ HOME_URL=https://fedoraproject.org/ 00:00:56.664 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:56.664 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:56.664 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:56.664 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:56.664 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:56.664 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:56.664 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:56.664 ++ SUPPORT_END=2024-05-14 00:00:56.664 ++ VARIANT='Cloud Edition' 00:00:56.664 ++ VARIANT_ID=cloud 00:00:56.664 + uname -a 00:00:56.664 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:56.664 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:59.203 Hugepages 00:00:59.203 node hugesize free / total 00:00:59.203 node0 1048576kB 0 / 0 00:00:59.203 node0 2048kB 0 / 0 00:00:59.203 node1 1048576kB 0 / 0 00:00:59.203 node1 2048kB 0 / 0 00:00:59.203 00:00:59.203 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:59.203 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:59.203 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:59.203 I/OAT 
0000:00:04.2 8086 2021 0 ioatdma - - 00:00:59.203 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:59.203 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:59.203 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:59.203 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:59.203 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:59.203 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:59.203 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:59.203 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:59.203 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:59.203 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:59.203 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:59.203 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:59.203 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:59.203 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:59.203 + rm -f /tmp/spdk-ld-path 00:00:59.203 + source autorun-spdk.conf 00:00:59.203 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.203 ++ SPDK_TEST_NVMF=1 00:00:59.203 ++ SPDK_TEST_NVME_CLI=1 00:00:59.203 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.203 ++ SPDK_TEST_NVMF_NICS=e810 00:00:59.203 ++ SPDK_TEST_VFIOUSER=1 00:00:59.203 ++ SPDK_RUN_UBSAN=1 00:00:59.203 ++ NET_TYPE=phy 00:00:59.203 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:59.203 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:59.203 ++ RUN_NIGHTLY=1 00:00:59.203 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:59.203 + [[ -n '' ]] 00:00:59.203 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:59.203 + for M in /var/spdk/build-*-manifest.txt 00:00:59.203 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:59.203 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:59.203 + for M in /var/spdk/build-*-manifest.txt 00:00:59.203 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:59.203 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:59.203 ++ uname 00:00:59.203 + [[ Linux == \L\i\n\u\x ]] 00:00:59.203 + sudo dmesg -T 00:00:59.203 + sudo dmesg --clear 00:00:59.203 + dmesg_pid=1304124 00:00:59.203 + [[ Fedora Linux == FreeBSD ]] 00:00:59.203 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:59.203 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:59.203 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:59.203 + [[ -x /usr/src/fio-static/fio ]] 00:00:59.203 + export FIO_BIN=/usr/src/fio-static/fio 00:00:59.203 + FIO_BIN=/usr/src/fio-static/fio 00:00:59.203 + sudo dmesg -Tw 00:00:59.203 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:59.203 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:59.203 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:59.203 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:59.203 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:59.203 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:59.203 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:59.203 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:59.203 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:59.203 Test configuration: 00:00:59.203 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.203 SPDK_TEST_NVMF=1 00:00:59.203 SPDK_TEST_NVME_CLI=1 00:00:59.203 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.203 SPDK_TEST_NVMF_NICS=e810 00:00:59.203 SPDK_TEST_VFIOUSER=1 00:00:59.203 SPDK_RUN_UBSAN=1 00:00:59.203 NET_TYPE=phy 00:00:59.203 SPDK_TEST_NATIVE_DPDK=main 00:00:59.203 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:59.203 RUN_NIGHTLY=1 19:07:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:59.203 19:07:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:59.203 19:07:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:59.203 19:07:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:59.462 19:07:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.463 19:07:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.463 19:07:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.463 19:07:10 -- paths/export.sh@5 -- $ export PATH 00:00:59.463 19:07:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.463 19:07:10 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:59.463 19:07:10 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:59.463 19:07:10 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721063230.XXXXXX 00:00:59.463 19:07:10 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721063230.8Jao3v 00:00:59.463 19:07:10 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:59.463 19:07:10 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:00:59.463 19:07:10 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:59.463 19:07:10 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:59.463 19:07:10 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:59.463 19:07:10 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:59.463 19:07:10 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:59.463 19:07:10 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:59.463 19:07:10 -- common/autotest_common.sh@10 -- $ set +x 00:00:59.463 19:07:10 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:59.463 19:07:10 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:59.463 19:07:10 -- pm/common@17 -- $ local monitor 00:00:59.463 19:07:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.463 19:07:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.463 19:07:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.463 19:07:10 -- pm/common@21 -- $ date +%s 00:00:59.463 19:07:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.463 19:07:10 -- pm/common@21 -- $ date +%s 00:00:59.463 19:07:10 -- pm/common@25 -- $ sleep 1 00:00:59.463 19:07:10 -- pm/common@21 -- $ date +%s 00:00:59.463 19:07:10 -- pm/common@21 -- $ date +%s 00:00:59.463 19:07:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721063230 00:00:59.463 19:07:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721063230 00:00:59.463 19:07:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721063230 00:00:59.463 19:07:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721063230 00:00:59.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721063230_collect-vmstat.pm.log 00:00:59.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721063230_collect-cpu-load.pm.log 00:00:59.463 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721063230_collect-cpu-temp.pm.log 00:00:59.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721063230_collect-bmc-pm.bmc.pm.log 00:01:00.403 19:07:11 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:00.403 19:07:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:00.403 19:07:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:00.403 19:07:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.403 19:07:11 -- spdk/autobuild.sh@16 -- $ date -u 00:01:00.403 Mon Jul 15 05:07:11 PM UTC 2024 00:01:00.403 19:07:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:00.403 v24.09-pre-209-ga95bbf233 00:01:00.403 19:07:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:00.403 19:07:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:00.403 19:07:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:00.403 19:07:11 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:00.403 19:07:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:00.403 19:07:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.403 ************************************ 00:01:00.403 START TEST ubsan 00:01:00.403 ************************************ 00:01:00.403 19:07:11 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:00.403 using ubsan 00:01:00.403 00:01:00.403 real 0m0.000s 00:01:00.403 user 0m0.000s 00:01:00.403 sys 0m0.000s 00:01:00.403 19:07:11 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:00.403 19:07:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:00.403 ************************************ 00:01:00.403 END TEST ubsan 00:01:00.403 ************************************ 00:01:00.403 19:07:11 -- common/autotest_common.sh@1142 -- $ return 0 00:01:00.403 19:07:11 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:00.403 19:07:11 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:00.403 19:07:11 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:00.403 19:07:11 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:00.403 19:07:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:00.403 19:07:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.403 ************************************ 00:01:00.403 START TEST build_native_dpdk 00:01:00.403 ************************************ 00:01:00.403 19:07:11 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:00.403 19:07:11 build_native_dpdk -- 
common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:00.403 fa8d2f7f28 version: 24.07-rc2 00:01:00.403 d4bc3c2e01 maintainers: update for cxgbe driver 00:01:00.403 2227c0ed9a maintainers: update for Microsoft drivers 00:01:00.403 8385370337 maintainers: update for Arm 00:01:00.403 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:00.403 19:07:11 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@370 -- $ 
cmp_versions 24.07.0-rc2 '<' 21.11.0 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:00.403 19:07:11 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:00.663 19:07:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:00.663 19:07:11 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:00.663 19:07:11 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:00.663 patching file config/rte_config.h 00:01:00.663 Hunk #1 succeeded at 70 (offset 11 lines). 
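The xtrace entries above walk the dotted-version check in scripts/common.sh: 24.07.0-rc2 is split on '.', '-' and ':' into fields, compared field by field against 21.11.0, and the 24 > 21 result in the first field yields return code 1 before config/rte_config.h is patched. A minimal sketch of that comparison idea in shell, using a hypothetical helper name version_lt rather than the actual SPDK functions:

    version_lt() {
        # Split "MAJOR.MINOR.PATCH[-suffix]" on '.', '-' and ':' (as the trace above does).
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        local i x y
        for (( i = 0; i < n; i++ )); do
            # Missing or non-numeric fields (e.g. "rc2") compare as 0.
            x=${a[i]:-0}; y=${b[i]:-0}
            [[ $x =~ ^[0-9]+$ ]] || x=0
            [[ $y =~ ^[0-9]+$ ]] || y=0
            if (( 10#$x < 10#$y )); then return 0; fi   # first differing field decides
            if (( 10#$x > 10#$y )); then return 1; fi
        done
        return 1                                        # equal versions are not "less than"
    }

    version_lt 24.07.0-rc2 21.11.0; echo $?             # prints 1, matching the trace above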
00:01:00.663 19:07:11 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:00.663 19:07:11 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:00.663 19:07:11 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:00.663 19:07:11 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:00.663 19:07:11 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:04.865 The Meson build system 00:01:04.865 Version: 1.3.1 00:01:04.865 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:04.865 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:04.865 Build type: native build 00:01:04.865 Program cat found: YES (/usr/bin/cat) 00:01:04.865 Project name: DPDK 00:01:04.865 Project version: 24.07.0-rc2 00:01:04.865 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:04.865 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:04.865 Host machine cpu family: x86_64 00:01:04.865 Host machine cpu: x86_64 00:01:04.865 Message: ## Building in Developer Mode ## 00:01:04.865 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:04.865 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:04.865 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:04.865 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:04.865 Program cat found: YES (/usr/bin/cat) 00:01:04.865 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:04.865 Compiler for C supports arguments -march=native: YES 00:01:04.865 Checking for size of "void *" : 8 00:01:04.865 Checking for size of "void *" : 8 (cached) 00:01:04.865 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:04.865 Library m found: YES 00:01:04.865 Library numa found: YES 00:01:04.865 Has header "numaif.h" : YES 00:01:04.865 Library fdt found: NO 00:01:04.865 Library execinfo found: NO 00:01:04.865 Has header "execinfo.h" : YES 00:01:04.865 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:04.865 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:04.865 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:04.865 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:04.865 Run-time dependency openssl found: YES 3.0.9 00:01:04.865 Run-time dependency libpcap found: YES 1.10.4 00:01:04.865 Has header "pcap.h" with dependency libpcap: YES 00:01:04.865 Compiler for C supports arguments -Wcast-qual: YES 00:01:04.865 Compiler for C supports arguments -Wdeprecated: YES 00:01:04.865 Compiler for C supports arguments -Wformat: YES 00:01:04.865 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:04.865 Compiler for C supports arguments -Wformat-security: NO 00:01:04.865 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:04.865 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:04.865 Compiler for C supports arguments -Wnested-externs: YES 00:01:04.865 Compiler for C supports arguments -Wold-style-definition: YES 00:01:04.865 Compiler for C supports arguments -Wpointer-arith: YES 00:01:04.865 Compiler for C supports arguments -Wsign-compare: YES 00:01:04.865 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:04.865 Compiler for C supports arguments -Wundef: YES 00:01:04.865 Compiler for C supports arguments -Wwrite-strings: YES 00:01:04.865 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:04.865 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:04.865 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:04.865 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:04.865 Program objdump found: YES (/usr/bin/objdump) 00:01:04.865 Compiler for C supports arguments -mavx512f: YES 00:01:04.865 Checking if "AVX512 checking" compiles: YES 00:01:04.865 Fetching value of define "__SSE4_2__" : 1 00:01:04.865 Fetching value of define "__AES__" : 1 00:01:04.865 Fetching value of define "__AVX__" : 1 00:01:04.865 Fetching value of define "__AVX2__" : 1 00:01:04.865 Fetching value of define "__AVX512BW__" : 1 00:01:04.865 Fetching value of define "__AVX512CD__" : 1 00:01:04.865 Fetching value of define "__AVX512DQ__" : 1 00:01:04.865 Fetching value of define "__AVX512F__" : 1 00:01:04.865 Fetching value of define "__AVX512VL__" : 1 00:01:04.865 Fetching value of define "__PCLMUL__" : 1 00:01:04.865 Fetching value of define "__RDRND__" : 1 00:01:04.865 Fetching value of define "__RDSEED__" : 1 00:01:04.865 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:04.865 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:04.865 Message: lib/log: Defining dependency "log" 00:01:04.865 Message: lib/kvargs: Defining dependency "kvargs" 00:01:04.865 Message: lib/argparse: Defining dependency "argparse" 00:01:04.865 Message: lib/telemetry: Defining dependency "telemetry" 00:01:04.865 Checking for function "getentropy" : NO 00:01:04.865 Message: lib/eal: Defining dependency "eal" 
00:01:04.865 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:04.865 Message: lib/ring: Defining dependency "ring" 00:01:04.865 Message: lib/rcu: Defining dependency "rcu" 00:01:04.865 Message: lib/mempool: Defining dependency "mempool" 00:01:04.865 Message: lib/mbuf: Defining dependency "mbuf" 00:01:04.865 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:04.865 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:04.865 Compiler for C supports arguments -mpclmul: YES 00:01:04.865 Compiler for C supports arguments -maes: YES 00:01:04.865 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:04.865 Compiler for C supports arguments -mavx512bw: YES 00:01:04.865 Compiler for C supports arguments -mavx512dq: YES 00:01:04.865 Compiler for C supports arguments -mavx512vl: YES 00:01:04.865 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:04.865 Compiler for C supports arguments -mavx2: YES 00:01:04.865 Compiler for C supports arguments -mavx: YES 00:01:04.865 Message: lib/net: Defining dependency "net" 00:01:04.865 Message: lib/meter: Defining dependency "meter" 00:01:04.865 Message: lib/ethdev: Defining dependency "ethdev" 00:01:04.865 Message: lib/pci: Defining dependency "pci" 00:01:04.865 Message: lib/cmdline: Defining dependency "cmdline" 00:01:04.865 Message: lib/metrics: Defining dependency "metrics" 00:01:04.865 Message: lib/hash: Defining dependency "hash" 00:01:04.865 Message: lib/timer: Defining dependency "timer" 00:01:04.865 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:04.865 Message: lib/acl: Defining dependency "acl" 00:01:04.865 Message: lib/bbdev: Defining dependency "bbdev" 00:01:04.865 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:04.865 Run-time dependency libelf found: YES 0.190 00:01:04.865 Message: lib/bpf: Defining dependency "bpf" 00:01:04.865 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:04.865 Message: lib/compressdev: Defining dependency "compressdev" 00:01:04.865 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:04.865 Message: lib/distributor: Defining dependency "distributor" 00:01:04.865 Message: lib/dmadev: Defining dependency "dmadev" 00:01:04.865 Message: lib/efd: Defining dependency "efd" 00:01:04.865 Message: lib/eventdev: Defining dependency "eventdev" 00:01:04.865 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:04.865 Message: lib/gpudev: Defining dependency "gpudev" 00:01:04.865 Message: lib/gro: Defining dependency "gro" 00:01:04.865 Message: lib/gso: Defining dependency "gso" 00:01:04.865 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:04.865 Message: lib/jobstats: Defining dependency "jobstats" 00:01:04.865 Message: lib/latencystats: Defining dependency "latencystats" 00:01:04.865 Message: lib/lpm: Defining dependency "lpm" 00:01:04.865 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:04.865 Compiler for C supports 
arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:04.865 Message: lib/member: Defining dependency "member" 00:01:04.865 Message: lib/pcapng: Defining dependency "pcapng" 00:01:04.865 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:04.865 Message: lib/power: Defining dependency "power" 00:01:04.865 Message: lib/rawdev: Defining dependency "rawdev" 00:01:04.865 Message: lib/regexdev: Defining dependency "regexdev" 00:01:04.865 Message: lib/mldev: Defining dependency "mldev" 00:01:04.865 Message: lib/rib: Defining dependency "rib" 00:01:04.865 Message: lib/reorder: Defining dependency "reorder" 00:01:04.865 Message: lib/sched: Defining dependency "sched" 00:01:04.865 Message: lib/security: Defining dependency "security" 00:01:04.865 Message: lib/stack: Defining dependency "stack" 00:01:04.865 Has header "linux/userfaultfd.h" : YES 00:01:04.865 Has header "linux/vduse.h" : YES 00:01:04.865 Message: lib/vhost: Defining dependency "vhost" 00:01:04.865 Message: lib/ipsec: Defining dependency "ipsec" 00:01:04.865 Message: lib/pdcp: Defining dependency "pdcp" 00:01:04.865 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:04.865 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:04.865 Message: lib/fib: Defining dependency "fib" 00:01:04.865 Message: lib/port: Defining dependency "port" 00:01:04.865 Message: lib/pdump: Defining dependency "pdump" 00:01:04.865 Message: lib/table: Defining dependency "table" 00:01:04.865 Message: lib/pipeline: Defining dependency "pipeline" 00:01:04.865 Message: lib/graph: Defining dependency "graph" 00:01:04.865 Message: lib/node: Defining dependency "node" 00:01:04.865 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:05.820 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:05.820 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:05.820 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:05.820 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:05.820 Compiler for C supports arguments -Wno-unused-value: YES 00:01:05.820 Compiler for C supports arguments -Wno-format: YES 00:01:05.820 Compiler for C supports arguments -Wno-format-security: YES 00:01:05.820 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:05.820 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:05.820 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:05.820 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:05.820 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:05.820 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:05.820 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:05.820 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:05.820 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:05.820 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:05.820 Has header "sys/epoll.h" : YES 00:01:05.820 Program doxygen found: YES (/usr/bin/doxygen) 00:01:05.820 Configuring doxy-api-html.conf using configuration 00:01:05.820 Configuring doxy-api-man.conf using configuration 00:01:05.820 Program mandb found: YES (/usr/bin/mandb) 00:01:05.820 Program sphinx-build found: NO 00:01:05.820 Configuring rte_build_config.h using configuration 00:01:05.820 Message: 00:01:05.820 ================= 00:01:05.820 Applications Enabled 00:01:05.820 ================= 00:01:05.820 00:01:05.820 apps: 
00:01:05.820 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:05.820 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:05.820 test-pmd, test-regex, test-sad, test-security-perf, 00:01:05.820 00:01:05.820 Message: 00:01:05.820 ================= 00:01:05.820 Libraries Enabled 00:01:05.820 ================= 00:01:05.820 00:01:05.820 libs: 00:01:05.820 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:05.821 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:05.821 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:05.821 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:05.821 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:05.821 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:05.821 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:05.821 graph, node, 00:01:05.821 00:01:05.821 Message: 00:01:05.821 =============== 00:01:05.821 Drivers Enabled 00:01:05.821 =============== 00:01:05.821 00:01:05.821 common: 00:01:05.821 00:01:05.821 bus: 00:01:05.821 pci, vdev, 00:01:05.821 mempool: 00:01:05.821 ring, 00:01:05.821 dma: 00:01:05.821 00:01:05.821 net: 00:01:05.821 i40e, 00:01:05.821 raw: 00:01:05.821 00:01:05.821 crypto: 00:01:05.821 00:01:05.821 compress: 00:01:05.821 00:01:05.821 regex: 00:01:05.821 00:01:05.821 ml: 00:01:05.821 00:01:05.821 vdpa: 00:01:05.821 00:01:05.821 event: 00:01:05.821 00:01:05.821 baseband: 00:01:05.821 00:01:05.821 gpu: 00:01:05.821 00:01:05.821 00:01:05.821 Message: 00:01:05.821 ================= 00:01:05.821 Content Skipped 00:01:05.821 ================= 00:01:05.821 00:01:05.821 apps: 00:01:05.821 00:01:05.821 libs: 00:01:05.821 00:01:05.821 drivers: 00:01:05.821 common/cpt: not in enabled drivers build config 00:01:05.821 common/dpaax: not in enabled drivers build config 00:01:05.821 common/iavf: not in enabled drivers build config 00:01:05.821 common/idpf: not in enabled drivers build config 00:01:05.821 common/ionic: not in enabled drivers build config 00:01:05.821 common/mvep: not in enabled drivers build config 00:01:05.821 common/octeontx: not in enabled drivers build config 00:01:05.821 bus/auxiliary: not in enabled drivers build config 00:01:05.821 bus/cdx: not in enabled drivers build config 00:01:05.821 bus/dpaa: not in enabled drivers build config 00:01:05.821 bus/fslmc: not in enabled drivers build config 00:01:05.821 bus/ifpga: not in enabled drivers build config 00:01:05.821 bus/platform: not in enabled drivers build config 00:01:05.821 bus/uacce: not in enabled drivers build config 00:01:05.821 bus/vmbus: not in enabled drivers build config 00:01:05.821 common/cnxk: not in enabled drivers build config 00:01:05.821 common/mlx5: not in enabled drivers build config 00:01:05.821 common/nfp: not in enabled drivers build config 00:01:05.821 common/nitrox: not in enabled drivers build config 00:01:05.821 common/qat: not in enabled drivers build config 00:01:05.821 common/sfc_efx: not in enabled drivers build config 00:01:05.821 mempool/bucket: not in enabled drivers build config 00:01:05.821 mempool/cnxk: not in enabled drivers build config 00:01:05.821 mempool/dpaa: not in enabled drivers build config 00:01:05.821 mempool/dpaa2: not in enabled drivers build config 00:01:05.821 mempool/octeontx: not in enabled drivers build config 00:01:05.821 mempool/stack: not in enabled drivers build config 00:01:05.821 
dma/cnxk: not in enabled drivers build config 00:01:05.821 dma/dpaa: not in enabled drivers build config 00:01:05.821 dma/dpaa2: not in enabled drivers build config 00:01:05.821 dma/hisilicon: not in enabled drivers build config 00:01:05.821 dma/idxd: not in enabled drivers build config 00:01:05.821 dma/ioat: not in enabled drivers build config 00:01:05.821 dma/odm: not in enabled drivers build config 00:01:05.821 dma/skeleton: not in enabled drivers build config 00:01:05.821 net/af_packet: not in enabled drivers build config 00:01:05.821 net/af_xdp: not in enabled drivers build config 00:01:05.821 net/ark: not in enabled drivers build config 00:01:05.821 net/atlantic: not in enabled drivers build config 00:01:05.821 net/avp: not in enabled drivers build config 00:01:05.821 net/axgbe: not in enabled drivers build config 00:01:05.821 net/bnx2x: not in enabled drivers build config 00:01:05.821 net/bnxt: not in enabled drivers build config 00:01:05.821 net/bonding: not in enabled drivers build config 00:01:05.821 net/cnxk: not in enabled drivers build config 00:01:05.821 net/cpfl: not in enabled drivers build config 00:01:05.821 net/cxgbe: not in enabled drivers build config 00:01:05.821 net/dpaa: not in enabled drivers build config 00:01:05.821 net/dpaa2: not in enabled drivers build config 00:01:05.821 net/e1000: not in enabled drivers build config 00:01:05.821 net/ena: not in enabled drivers build config 00:01:05.821 net/enetc: not in enabled drivers build config 00:01:05.821 net/enetfec: not in enabled drivers build config 00:01:05.821 net/enic: not in enabled drivers build config 00:01:05.821 net/failsafe: not in enabled drivers build config 00:01:05.821 net/fm10k: not in enabled drivers build config 00:01:05.821 net/gve: not in enabled drivers build config 00:01:05.821 net/hinic: not in enabled drivers build config 00:01:05.821 net/hns3: not in enabled drivers build config 00:01:05.821 net/iavf: not in enabled drivers build config 00:01:05.821 net/ice: not in enabled drivers build config 00:01:05.821 net/idpf: not in enabled drivers build config 00:01:05.821 net/igc: not in enabled drivers build config 00:01:05.821 net/ionic: not in enabled drivers build config 00:01:05.821 net/ipn3ke: not in enabled drivers build config 00:01:05.821 net/ixgbe: not in enabled drivers build config 00:01:05.821 net/mana: not in enabled drivers build config 00:01:05.821 net/memif: not in enabled drivers build config 00:01:05.821 net/mlx4: not in enabled drivers build config 00:01:05.821 net/mlx5: not in enabled drivers build config 00:01:05.821 net/mvneta: not in enabled drivers build config 00:01:05.821 net/mvpp2: not in enabled drivers build config 00:01:05.821 net/netvsc: not in enabled drivers build config 00:01:05.821 net/nfb: not in enabled drivers build config 00:01:05.821 net/nfp: not in enabled drivers build config 00:01:05.821 net/ngbe: not in enabled drivers build config 00:01:05.821 net/null: not in enabled drivers build config 00:01:05.821 net/octeontx: not in enabled drivers build config 00:01:05.821 net/octeon_ep: not in enabled drivers build config 00:01:05.821 net/pcap: not in enabled drivers build config 00:01:05.821 net/pfe: not in enabled drivers build config 00:01:05.821 net/qede: not in enabled drivers build config 00:01:05.821 net/ring: not in enabled drivers build config 00:01:05.821 net/sfc: not in enabled drivers build config 00:01:05.821 net/softnic: not in enabled drivers build config 00:01:05.821 net/tap: not in enabled drivers build config 00:01:05.821 net/thunderx: not in 
enabled drivers build config 00:01:05.821 net/txgbe: not in enabled drivers build config 00:01:05.821 net/vdev_netvsc: not in enabled drivers build config 00:01:05.821 net/vhost: not in enabled drivers build config 00:01:05.821 net/virtio: not in enabled drivers build config 00:01:05.821 net/vmxnet3: not in enabled drivers build config 00:01:05.821 raw/cnxk_bphy: not in enabled drivers build config 00:01:05.821 raw/cnxk_gpio: not in enabled drivers build config 00:01:05.821 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:05.821 raw/ifpga: not in enabled drivers build config 00:01:05.821 raw/ntb: not in enabled drivers build config 00:01:05.821 raw/skeleton: not in enabled drivers build config 00:01:05.821 crypto/armv8: not in enabled drivers build config 00:01:05.821 crypto/bcmfs: not in enabled drivers build config 00:01:05.821 crypto/caam_jr: not in enabled drivers build config 00:01:05.821 crypto/ccp: not in enabled drivers build config 00:01:05.821 crypto/cnxk: not in enabled drivers build config 00:01:05.821 crypto/dpaa_sec: not in enabled drivers build config 00:01:05.821 crypto/dpaa2_sec: not in enabled drivers build config 00:01:05.821 crypto/ionic: not in enabled drivers build config 00:01:05.821 crypto/ipsec_mb: not in enabled drivers build config 00:01:05.821 crypto/mlx5: not in enabled drivers build config 00:01:05.821 crypto/mvsam: not in enabled drivers build config 00:01:05.821 crypto/nitrox: not in enabled drivers build config 00:01:05.821 crypto/null: not in enabled drivers build config 00:01:05.821 crypto/octeontx: not in enabled drivers build config 00:01:05.821 crypto/openssl: not in enabled drivers build config 00:01:05.821 crypto/scheduler: not in enabled drivers build config 00:01:05.821 crypto/uadk: not in enabled drivers build config 00:01:05.821 crypto/virtio: not in enabled drivers build config 00:01:05.821 compress/isal: not in enabled drivers build config 00:01:05.821 compress/mlx5: not in enabled drivers build config 00:01:05.821 compress/nitrox: not in enabled drivers build config 00:01:05.821 compress/octeontx: not in enabled drivers build config 00:01:05.821 compress/uadk: not in enabled drivers build config 00:01:05.821 compress/zlib: not in enabled drivers build config 00:01:05.821 regex/mlx5: not in enabled drivers build config 00:01:05.821 regex/cn9k: not in enabled drivers build config 00:01:05.821 ml/cnxk: not in enabled drivers build config 00:01:05.821 vdpa/ifc: not in enabled drivers build config 00:01:05.821 vdpa/mlx5: not in enabled drivers build config 00:01:05.821 vdpa/nfp: not in enabled drivers build config 00:01:05.821 vdpa/sfc: not in enabled drivers build config 00:01:05.821 event/cnxk: not in enabled drivers build config 00:01:05.821 event/dlb2: not in enabled drivers build config 00:01:05.821 event/dpaa: not in enabled drivers build config 00:01:05.821 event/dpaa2: not in enabled drivers build config 00:01:05.821 event/dsw: not in enabled drivers build config 00:01:05.821 event/opdl: not in enabled drivers build config 00:01:05.821 event/skeleton: not in enabled drivers build config 00:01:05.821 event/sw: not in enabled drivers build config 00:01:05.821 event/octeontx: not in enabled drivers build config 00:01:05.821 baseband/acc: not in enabled drivers build config 00:01:05.821 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:05.821 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:05.821 baseband/la12xx: not in enabled drivers build config 00:01:05.821 baseband/null: not in enabled drivers 
build config 00:01:05.821 baseband/turbo_sw: not in enabled drivers build config 00:01:05.821 gpu/cuda: not in enabled drivers build config 00:01:05.821 00:01:05.821 00:01:05.821 Build targets in project: 221 00:01:05.821 00:01:05.821 DPDK 24.07.0-rc2 00:01:05.821 00:01:05.821 User defined options 00:01:05.821 libdir : lib 00:01:05.821 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:05.821 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:05.821 c_link_args : 00:01:05.821 enable_docs : false 00:01:05.821 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:05.821 enable_kmods : false 00:01:05.821 machine : native 00:01:05.821 tests : false 00:01:05.821 00:01:05.821 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:05.821 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:05.821 19:07:16 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:01:05.822 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:05.822 [1/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:06.086 [2/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:06.086 [3/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:06.086 [4/720] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:06.086 [5/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:06.086 [6/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:06.086 [7/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:06.086 [8/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:06.086 [9/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:06.086 [10/720] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:06.086 [11/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:06.086 [12/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:06.086 [13/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:06.086 [14/720] Linking static target lib/librte_kvargs.a 00:01:06.086 [15/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:06.086 [16/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:06.086 [17/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:06.353 [18/720] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:06.353 [19/720] Linking static target lib/librte_log.a 00:01:06.353 [20/720] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:06.353 [21/720] Linking static target lib/librte_pci.a 00:01:06.353 [22/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:06.353 [23/720] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:06.353 [24/720] Linking static target lib/librte_argparse.a 00:01:06.353 [25/720] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.353 [26/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:06.353 [27/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:06.612 [28/720] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:06.612 [29/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:06.612 [30/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:06.612 [31/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:06.612 [32/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:06.612 [33/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:06.612 [34/720] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.612 [35/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:06.612 [36/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:06.612 [37/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:06.612 [38/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:06.612 [39/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:06.612 [40/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:06.612 [41/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:06.612 [42/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:06.612 [43/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:06.612 [44/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:06.612 [45/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:06.612 [46/720] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:06.612 [47/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:06.612 [48/720] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:06.612 [49/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:06.612 [50/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:06.612 [51/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:06.612 [52/720] Linking static target lib/librte_meter.a 00:01:06.612 [53/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:06.612 [54/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:06.612 [55/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:06.612 [56/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:06.612 [57/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:06.612 [58/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:06.612 [59/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:06.612 [60/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:06.612 [61/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:06.612 [62/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:06.612 [63/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:06.612 [64/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:06.612 [65/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:06.612 [66/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:06.612 [67/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:06.612 [68/720] 
Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:06.612 [69/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:06.612 [70/720] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:06.612 [71/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:06.612 [72/720] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:06.612 [73/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:06.612 [74/720] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.612 [75/720] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:06.612 [76/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:06.612 [77/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:06.612 [78/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:06.612 [79/720] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:06.612 [80/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:06.612 [81/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:06.875 [82/720] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:06.875 [83/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:06.875 [84/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:06.875 [85/720] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:06.875 [86/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:06.875 [87/720] Linking static target lib/librte_ring.a 00:01:06.875 [88/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:06.875 [89/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:06.875 [90/720] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:06.875 [91/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:06.875 [92/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:06.875 [93/720] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:06.875 [94/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:06.875 [95/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:06.875 [96/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:06.875 [97/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:06.875 [98/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:06.875 [99/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:06.875 [100/720] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:06.875 [101/720] Linking static target lib/librte_net.a 00:01:06.875 [102/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:06.875 [103/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:06.875 [104/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:06.875 [105/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:06.875 [106/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:06.875 [107/720] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.875 [108/720] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.875 [109/720] Linking target lib/librte_log.so.24.2 00:01:06.875 [110/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.135 [111/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:07.135 [112/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:07.135 [113/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:07.135 [114/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:07.135 [115/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.135 [116/720] Linking static target lib/librte_cmdline.a 00:01:07.135 [117/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:07.135 [118/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:07.135 [119/720] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:07.135 [120/720] Linking static target lib/librte_cfgfile.a 00:01:07.135 [121/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:07.135 [122/720] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:07.135 [123/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:07.135 [124/720] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.135 [125/720] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:07.135 [126/720] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:07.135 [127/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:07.135 [128/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:07.135 [129/720] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:07.135 [130/720] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:07.135 [131/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:07.135 [132/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:07.135 [133/720] Linking target lib/librte_kvargs.so.24.2 00:01:07.135 [134/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:07.135 [135/720] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:07.135 [136/720] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:07.135 [137/720] Linking static target lib/librte_metrics.a 00:01:07.399 [138/720] Linking target lib/librte_argparse.so.24.2 00:01:07.399 [139/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:07.399 [140/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:07.399 [141/720] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.399 [142/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:07.399 [143/720] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:07.399 [144/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:07.399 [145/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:07.399 [146/720] Linking static target lib/librte_mempool.a 00:01:07.399 [147/720] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:07.399 [148/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:07.399 [149/720] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:07.399 [150/720] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:07.399 [151/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:07.399 [152/720] Linking static target lib/librte_bitratestats.a 00:01:07.399 [153/720] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:07.399 [154/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:07.399 [155/720] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:07.399 [156/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:07.399 [157/720] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:07.399 [158/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:07.399 [159/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:07.399 [160/720] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:07.399 [161/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:07.399 [162/720] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:07.399 [163/720] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:07.399 [164/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:07.399 [165/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:07.665 [166/720] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:07.665 [167/720] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.665 [168/720] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:07.665 [169/720] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:07.665 [170/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:07.665 [171/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:07.665 [172/720] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:07.665 [173/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:07.665 [174/720] Linking static target lib/librte_compressdev.a 00:01:07.665 [175/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:07.665 [176/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:07.665 [177/720] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:07.665 [178/720] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:07.665 [179/720] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.665 [180/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:07.665 [181/720] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:07.665 [182/720] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:07.665 [183/720] Linking static target lib/librte_rcu.a 00:01:07.665 [184/720] Linking static target lib/librte_jobstats.a 00:01:07.665 [185/720] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:07.665 [186/720] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:07.665 [187/720] Linking static target lib/librte_timer.a 00:01:07.665 [188/720] Linking static target lib/librte_dispatcher.a 00:01:07.665 [189/720] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:07.665 [190/720] Compiling C 
object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:07.665 [191/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:07.665 [192/720] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:07.665 [193/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:07.665 [194/720] Linking static target lib/librte_telemetry.a 00:01:07.665 [195/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:07.665 [196/720] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.665 [197/720] Linking static target lib/librte_eal.a 00:01:07.927 [198/720] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:07.927 [199/720] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:07.927 [200/720] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:07.927 [201/720] Linking static target lib/librte_bbdev.a 00:01:07.927 [202/720] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:07.927 [203/720] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:07.927 [204/720] Linking static target lib/librte_gro.a 00:01:07.927 [205/720] Linking static target lib/librte_gpudev.a 00:01:07.927 [206/720] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:07.927 [207/720] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:07.927 [208/720] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:07.927 [209/720] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:07.927 [210/720] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:07.927 [211/720] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:07.927 [212/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:07.927 [213/720] Linking static target lib/librte_gso.a 00:01:07.927 [214/720] Linking static target lib/librte_distributor.a 00:01:07.927 [215/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:07.927 [216/720] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:07.927 [217/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:07.927 [218/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:07.927 [219/720] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:07.927 [220/720] Linking static target lib/librte_latencystats.a 00:01:07.927 [221/720] Linking static target lib/librte_dmadev.a 00:01:07.927 [222/720] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:07.927 [223/720] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:07.927 [224/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:07.927 [225/720] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:07.927 [226/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:07.927 [227/720] Linking static target lib/librte_mbuf.a 00:01:07.927 [228/720] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:07.927 [229/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:07.927 [230/720] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:07.927 [231/720] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:07.927 [232/720] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:07.927 
[233/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:07.927 [234/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:08.191 [235/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:08.191 [236/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:08.191 [237/720] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:08.191 [238/720] Linking static target lib/librte_stack.a 00:01:08.191 [239/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:08.191 [240/720] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:08.191 [241/720] Linking static target lib/librte_ip_frag.a 00:01:08.191 [242/720] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.191 [243/720] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:08.191 [244/720] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.191 [245/720] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:08.191 [246/720] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.191 [247/720] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:08.191 [248/720] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.191 [249/720] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:08.191 [250/720] Linking static target lib/librte_regexdev.a 00:01:08.191 [251/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:08.191 [252/720] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.191 [253/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:08.191 [254/720] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.191 [255/720] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:08.191 [256/720] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.191 [257/720] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:08.191 [258/720] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.455 [259/720] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:08.455 [260/720] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.455 [261/720] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.455 [262/720] Linking static target lib/librte_rawdev.a 00:01:08.455 [263/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:08.455 [264/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:08.455 [265/720] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:08.455 [266/720] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:08.455 [267/720] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.455 [268/720] Linking static target lib/librte_pcapng.a 00:01:08.455 [269/720] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:08.455 [270/720] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:08.455 [271/720] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:08.455 [272/720] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.455 [273/720] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:08.456 [274/720] Linking static target lib/librte_power.a 00:01:08.456 [275/720] Linking static target lib/librte_reorder.a 00:01:08.456 [276/720] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:08.456 [277/720] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.456 [278/720] Linking static target lib/librte_bpf.a 00:01:08.456 [279/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:08.456 [280/720] Linking static target lib/librte_mldev.a 00:01:08.456 [281/720] Linking target lib/librte_telemetry.so.24.2 00:01:08.456 [282/720] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:08.456 [283/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:08.456 [284/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:08.456 [285/720] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:08.456 [286/720] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:08.456 [287/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:08.456 [288/720] Linking static target lib/librte_security.a 00:01:08.456 [289/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:08.456 [290/720] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.456 [291/720] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:08.456 [292/720] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:08.715 [293/720] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:08.715 [294/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:08.715 [295/720] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:08.715 [296/720] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:08.715 [297/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:08.715 [298/720] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:08.715 [299/720] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:08.715 [300/720] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.715 [301/720] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:08.715 [302/720] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.715 [303/720] Linking static target lib/librte_lpm.a 00:01:08.715 [304/720] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:08.715 [305/720] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:08.715 [306/720] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.715 [307/720] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:08.715 [308/720] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:08.715 [309/720] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.715 [310/720] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:08.715 [311/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:08.715 [312/720] Compiling C object 
lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:08.715 [313/720] Linking static target lib/librte_rib.a 00:01:08.980 [314/720] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:08.980 [315/720] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.980 [316/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:08.980 [317/720] Linking static target lib/librte_efd.a 00:01:08.980 [318/720] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:08.980 [319/720] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:08.980 [320/720] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.980 [321/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:08.980 [322/720] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:08.980 [323/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:08.980 [324/720] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:08.980 [325/720] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.980 [326/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:08.980 [327/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:08.980 [328/720] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:08.980 [329/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:08.980 [330/720] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:08.980 [331/720] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:09.247 [332/720] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.247 [333/720] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:09.247 [334/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:09.247 [335/720] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.247 [336/720] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:09.247 [337/720] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:09.247 [338/720] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:09.247 [339/720] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:09.247 [340/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:09.247 [341/720] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:09.247 [342/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:09.247 [343/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:09.247 [344/720] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.247 [345/720] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:09.247 [346/720] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.247 [347/720] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:09.247 [348/720] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:09.247 [349/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:09.247 [350/720] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:09.247 [351/720] Compiling C object 
lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:09.247 [352/720] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:09.506 [353/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:09.506 [354/720] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:09.506 [355/720] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:09.506 [356/720] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:09.506 [357/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:09.506 [358/720] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:09.506 [359/720] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.506 [360/720] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.506 [361/720] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:09.506 [362/720] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:09.506 [363/720] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:09.506 [364/720] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:09.506 [365/720] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:09.506 [366/720] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:09.506 [367/720] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:09.506 [368/720] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.506 [369/720] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:09.506 [370/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:09.506 [371/720] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:09.506 [372/720] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:09.506 [373/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:09.770 [374/720] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:09.770 [375/720] Linking static target lib/librte_fib.a 00:01:09.770 [376/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:09.770 [377/720] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:09.770 [378/720] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:09.770 [379/720] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:09.770 [380/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:09.770 [381/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:09.770 [382/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:09.770 [383/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:09.770 [384/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:09.770 [385/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:09.770 [386/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:09.770 [387/720] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:09.770 [388/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:09.770 [389/720] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:09.770 [390/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:09.770 [391/720] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:09.770 [392/720] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:09.770 [393/720] Linking static target lib/librte_graph.a 00:01:10.034 [394/720] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:10.034 [395/720] Linking static target lib/librte_pdump.a 00:01:10.034 [396/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:10.034 [397/720] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:10.034 [398/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:10.034 [399/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:10.034 [400/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:10.034 [401/720] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:10.034 [402/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:10.034 [403/720] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:10.034 [404/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:10.295 [405/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:10.295 [406/720] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:10.295 [407/720] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.295 [408/720] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:10.295 [409/720] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:10.295 [410/720] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:10.295 [411/720] Linking static target lib/librte_cryptodev.a 00:01:10.295 [412/720] Linking static target drivers/librte_bus_vdev.a 00:01:10.295 [413/720] Linking static target lib/librte_sched.a 00:01:10.295 [414/720] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:10.295 [415/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:10.295 [416/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:10.295 [417/720] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:10.295 [418/720] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:10.295 [419/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:10.295 [420/720] Linking static target lib/librte_table.a 00:01:10.295 [421/720] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:10.295 [422/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:10.295 [423/720] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:10.295 [424/720] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:10.295 [425/720] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.295 [426/720] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:10.295 [427/720] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:10.295 [428/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:10.295 [429/720] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:10.295 [430/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:10.295 [431/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 
00:01:10.295 [432/720] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:10.295 [433/720] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:10.295 [434/720] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:10.295 [435/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:10.295 [436/720] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:10.295 [437/720] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:10.557 [438/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:10.557 [439/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:10.557 [440/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:10.557 [441/720] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:10.557 [442/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:10.557 [443/720] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:10.557 [444/720] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:10.557 [445/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:10.557 [446/720] Linking static target lib/librte_ipsec.a 00:01:10.557 [447/720] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:10.557 [448/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:10.557 [449/720] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:10.557 [450/720] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.557 [451/720] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:10.557 [452/720] Linking static target drivers/librte_bus_pci.a 00:01:10.557 [453/720] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:10.557 [454/720] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:10.557 [455/720] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:10.557 [456/720] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.557 [457/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:10.557 [458/720] Linking static target lib/librte_member.a 00:01:10.557 [459/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:10.557 [460/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:10.557 [461/720] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:10.825 [462/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:10.825 [463/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:10.825 [464/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:10.825 [465/720] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:10.825 [466/720] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:10.825 [467/720] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.825 [468/720] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:10.825 [469/720] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:10.825 [470/720] Generating lib/graph.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:10.825 [471/720] Linking static target lib/librte_node.a 00:01:10.825 [472/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:10.825 [473/720] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:10.825 [474/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:10.825 [475/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:10.825 [476/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:10.825 [477/720] Linking static target lib/librte_pdcp.a 00:01:10.825 [478/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:11.086 [479/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:11.086 [480/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:11.086 [481/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:11.086 [482/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:11.086 [483/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:11.086 [484/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:11.086 [485/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:11.086 [486/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:11.086 [487/720] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.086 [488/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:11.086 [489/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:11.086 [490/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:11.086 [491/720] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.086 [492/720] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:11.086 [493/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:11.086 [494/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:11.086 [495/720] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:11.086 [496/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:11.086 [497/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:11.086 [498/720] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:11.086 [499/720] Linking static target lib/librte_hash.a 00:01:11.086 [500/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:11.086 [501/720] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:11.086 [502/720] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:11.086 [503/720] Linking static target drivers/librte_mempool_ring.a 00:01:11.086 [504/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:11.086 [505/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:11.086 [506/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:11.086 [507/720] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:11.086 
[508/720] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:11.086 [509/720] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:11.086 [510/720] Linking static target lib/acl/libavx2_tmp.a 00:01:11.345 [511/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:11.345 [512/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:11.345 [513/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:11.345 [514/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:11.345 [515/720] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:11.345 [516/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:11.345 [517/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:11.345 [518/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:11.345 [519/720] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.345 [520/720] Linking static target lib/librte_port.a 00:01:11.345 [521/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:11.345 [522/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:11.345 [523/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:11.345 [524/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:11.345 [525/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:11.345 [526/720] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.345 [527/720] Linking static target lib/librte_eventdev.a 00:01:11.345 [528/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:11.345 [529/720] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.345 [530/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:11.345 [531/720] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.345 [532/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:11.345 [533/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:11.345 [534/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:11.345 [535/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:11.345 [536/720] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:11.603 [537/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:11.603 [538/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:11.603 [539/720] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:11.603 [540/720] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:11.603 [541/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:11.603 [542/720] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:11.603 [543/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:11.603 [544/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:11.603 [545/720] Compiling C object 
app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:11.603 [546/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:11.603 [547/720] Linking static target lib/librte_acl.a 00:01:11.603 [548/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:11.603 [549/720] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:11.603 [550/720] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:11.603 [551/720] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:11.603 [552/720] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:11.603 [553/720] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:11.603 [554/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:11.861 [555/720] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:11.861 [556/720] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:11.861 [557/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:11.861 [558/720] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:11.861 [559/720] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:11.861 [560/720] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:11.861 [561/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:11.861 [562/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:11.861 [563/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:11.861 [564/720] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:11.861 [565/720] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.861 [566/720] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:11.861 [567/720] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:11.861 [568/720] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.861 [569/720] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:11.861 [570/720] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:11.861 [571/720] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.861 [572/720] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:11.861 [573/720] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:12.118 [574/720] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:12.118 [575/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:12.119 [576/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:12.119 [577/720] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:12.119 [578/720] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:12.119 [579/720] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.119 [580/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:12.119 [581/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:12.377 [582/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:12.377 [583/720] Linking static target lib/librte_ethdev.a 00:01:12.377 
[584/720] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:12.377 [585/720] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:12.634 [586/720] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:12.634 [587/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:12.892 [588/720] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:12.892 [589/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:13.151 [590/720] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:13.410 [591/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:13.976 [592/720] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:13.976 [593/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:13.976 [594/720] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.976 [595/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:13.976 [596/720] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:14.235 [597/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:14.494 [598/720] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:14.494 [599/720] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:14.494 [600/720] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:14.494 [601/720] Linking static target drivers/librte_net_i40e.a 00:01:14.752 [602/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:15.320 [603/720] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.578 [604/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:15.837 [605/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:17.215 [606/720] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.215 [607/720] Linking target lib/librte_eal.so.24.2 00:01:17.215 [608/720] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:17.215 [609/720] Linking target lib/librte_ring.so.24.2 00:01:17.215 [610/720] Linking target lib/librte_timer.so.24.2 00:01:17.215 [611/720] Linking target lib/librte_cfgfile.so.24.2 00:01:17.215 [612/720] Linking target lib/librte_meter.so.24.2 00:01:17.215 [613/720] Linking target lib/librte_pci.so.24.2 00:01:17.215 [614/720] Linking target drivers/librte_bus_vdev.so.24.2 00:01:17.215 [615/720] Linking target lib/librte_jobstats.so.24.2 00:01:17.215 [616/720] Linking target lib/librte_stack.so.24.2 00:01:17.215 [617/720] Linking target lib/librte_rawdev.so.24.2 00:01:17.215 [618/720] Linking target lib/librte_dmadev.so.24.2 00:01:17.215 [619/720] Linking target lib/librte_acl.so.24.2 00:01:17.473 [620/720] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:17.473 [621/720] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:17.473 [622/720] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:17.473 [623/720] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:17.473 [624/720] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:17.473 [625/720] Generating 
symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:17.473 [626/720] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:17.473 [627/720] Linking target lib/librte_rcu.so.24.2 00:01:17.473 [628/720] Linking target drivers/librte_bus_pci.so.24.2 00:01:17.473 [629/720] Linking target lib/librte_mempool.so.24.2 00:01:17.473 [630/720] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:17.473 [631/720] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:17.732 [632/720] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:17.732 [633/720] Linking target lib/librte_mbuf.so.24.2 00:01:17.732 [634/720] Linking target drivers/librte_mempool_ring.so.24.2 00:01:17.732 [635/720] Linking target lib/librte_rib.so.24.2 00:01:17.732 [636/720] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:17.732 [637/720] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:17.732 [638/720] Linking target lib/librte_fib.so.24.2 00:01:17.732 [639/720] Linking target lib/librte_sched.so.24.2 00:01:17.732 [640/720] Linking target lib/librte_net.so.24.2 00:01:17.732 [641/720] Linking target lib/librte_gpudev.so.24.2 00:01:17.732 [642/720] Linking target lib/librte_bbdev.so.24.2 00:01:17.732 [643/720] Linking target lib/librte_reorder.so.24.2 00:01:17.732 [644/720] Linking target lib/librte_distributor.so.24.2 00:01:17.732 [645/720] Linking target lib/librte_regexdev.so.24.2 00:01:17.732 [646/720] Linking target lib/librte_compressdev.so.24.2 00:01:17.732 [647/720] Linking target lib/librte_mldev.so.24.2 00:01:17.732 [648/720] Linking target lib/librte_cryptodev.so.24.2 00:01:17.993 [649/720] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:17.993 [650/720] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:17.993 [651/720] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:17.993 [652/720] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:17.993 [653/720] Linking target lib/librte_cmdline.so.24.2 00:01:17.993 [654/720] Linking target lib/librte_security.so.24.2 00:01:17.993 [655/720] Linking target lib/librte_hash.so.24.2 00:01:17.993 [656/720] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:17.993 [657/720] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:18.252 [658/720] Linking target lib/librte_member.so.24.2 00:01:18.252 [659/720] Linking target lib/librte_pdcp.so.24.2 00:01:18.252 [660/720] Linking target lib/librte_efd.so.24.2 00:01:18.252 [661/720] Linking target lib/librte_lpm.so.24.2 00:01:18.252 [662/720] Linking target lib/librte_ipsec.so.24.2 00:01:18.252 [663/720] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:18.252 [664/720] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:19.187 [665/720] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.187 [666/720] Linking target lib/librte_ethdev.so.24.2 00:01:19.446 [667/720] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:19.446 [668/720] Linking target lib/librte_ip_frag.so.24.2 00:01:19.446 [669/720] Linking target lib/librte_gso.so.24.2 00:01:19.446 [670/720] 
Linking target lib/librte_gro.so.24.2 00:01:19.446 [671/720] Linking target lib/librte_metrics.so.24.2 00:01:19.446 [672/720] Linking target lib/librte_pcapng.so.24.2 00:01:19.446 [673/720] Linking target lib/librte_bpf.so.24.2 00:01:19.446 [674/720] Linking target lib/librte_power.so.24.2 00:01:19.446 [675/720] Linking target lib/librte_eventdev.so.24.2 00:01:19.446 [676/720] Linking target drivers/librte_net_i40e.so.24.2 00:01:19.446 [677/720] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:19.446 [678/720] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:19.446 [679/720] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:19.446 [680/720] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:19.705 [681/720] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:19.705 [682/720] Linking target lib/librte_bitratestats.so.24.2 00:01:19.705 [683/720] Linking target lib/librte_pdump.so.24.2 00:01:19.705 [684/720] Linking target lib/librte_latencystats.so.24.2 00:01:19.705 [685/720] Linking target lib/librte_dispatcher.so.24.2 00:01:19.705 [686/720] Linking target lib/librte_graph.so.24.2 00:01:19.705 [687/720] Linking target lib/librte_port.so.24.2 00:01:19.705 [688/720] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:19.705 [689/720] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:19.705 [690/720] Linking target lib/librte_node.so.24.2 00:01:19.705 [691/720] Linking target lib/librte_table.so.24.2 00:01:19.963 [692/720] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:22.496 [693/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:22.496 [694/720] Linking static target lib/librte_pipeline.a 00:01:23.064 [695/720] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:23.064 [696/720] Linking static target lib/librte_vhost.a 00:01:23.631 [697/720] Linking target app/dpdk-test-acl 00:01:23.631 [698/720] Linking target app/dpdk-test-dma-perf 00:01:23.631 [699/720] Linking target app/dpdk-test-crypto-perf 00:01:23.631 [700/720] Linking target app/dpdk-pdump 00:01:23.631 [701/720] Linking target app/dpdk-test-fib 00:01:23.631 [702/720] Linking target app/dpdk-test-bbdev 00:01:23.631 [703/720] Linking target app/dpdk-proc-info 00:01:23.631 [704/720] Linking target app/dpdk-test-compress-perf 00:01:23.631 [705/720] Linking target app/dpdk-test-sad 00:01:23.632 [706/720] Linking target app/dpdk-test-security-perf 00:01:23.632 [707/720] Linking target app/dpdk-test-pipeline 00:01:23.632 [708/720] Linking target app/dpdk-test-cmdline 00:01:23.632 [709/720] Linking target app/dpdk-test-gpudev 00:01:23.632 [710/720] Linking target app/dpdk-test-mldev 00:01:23.632 [711/720] Linking target app/dpdk-test-regex 00:01:23.632 [712/720] Linking target app/dpdk-test-flow-perf 00:01:23.632 [713/720] Linking target app/dpdk-dumpcap 00:01:23.632 [714/720] Linking target app/dpdk-graph 00:01:23.632 [715/720] Linking target app/dpdk-test-eventdev 00:01:23.632 [716/720] Linking target app/dpdk-testpmd 00:01:25.011 [717/720] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.011 [718/720] Linking target lib/librte_vhost.so.24.2 00:01:26.391 [719/720] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 
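For reference, the configure/build/install flow that this portion of the log reflects can be sketched as the shell commands below. The exact invocation lives in common/autobuild_common.sh and is not shown in this excerpt, so the paths and option spellings here are assumptions reconstructed from the "User defined options" summary earlier in the log and from the ninja commands around it; the explicit `meson setup` form is used to avoid the deprecation warning noted above.

    # Sketch only: reconstructed from the meson option summary in this log, not the literal script.
    DPDK_SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk

    meson setup "$DPDK_SRC/build-tmp" "$DPDK_SRC" \
      --prefix="$DPDK_SRC/build" --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false -Denable_kmods=false -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base

    ninja -C "$DPDK_SRC/build-tmp" -j96           # build phase logged above
    ninja -C "$DPDK_SRC/build-tmp" -j96 install   # install phase logged below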
00:01:26.650 [720/720] Linking target lib/librte_pipeline.so.24.2 00:01:26.650 19:07:37 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:01:26.650 19:07:37 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:26.650 19:07:37 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:01:26.650 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:26.650 [0/1] Installing files. 00:01:26.914 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:26.914 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:26.914 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:26.914 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:26.915 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:26.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:26.916 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:26.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:26.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:26.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:26.918 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:26.919 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:26.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:26.919 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_rcu.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_compressdev.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.919 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.920 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.920 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.920 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.920 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.920 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.920 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.920 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.920 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:26.920 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_power.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_pipeline.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:27.183 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:27.183 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:27.183 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.183 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:01:27.183 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 
Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.183 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.183 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.183 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.183 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.183 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.183 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.184 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.185 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:27.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:27.187 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:27.187 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:27.187 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:27.187 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:27.187 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:01:27.187 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:01:27.187 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:27.187 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:27.187 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:27.187 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:27.187 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:27.187 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:27.187 Installing symlink pointing 
to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:27.187 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:27.187 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:27.187 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:27.187 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:27.187 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:27.187 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:27.187 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:27.187 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:27.187 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:27.188 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:27.188 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:27.188 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:27.188 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:27.188 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:27.188 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:27.188 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:27.188 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:27.188 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:27.188 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:27.188 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:27.188 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:27.188 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:27.188 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:27.188 Installing symlink pointing to librte_bbdev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:27.188 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:27.188 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:27.188 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:27.188 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:27.188 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:27.188 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:27.188 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:27.188 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:27.188 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:27.188 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:27.188 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:27.188 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:27.188 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:27.188 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:27.188 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:27.188 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:27.188 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:27.188 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:27.188 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:27.188 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:27.188 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:27.188 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:27.188 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:27.188 Installing 
symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:27.188 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:27.188 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:27.188 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:27.188 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:27.188 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:27.188 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:27.188 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:27.188 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:27.188 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:27.188 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:27.188 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:27.188 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:27.188 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:27.188 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:27.188 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:27.188 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:27.188 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:27.188 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:27.188 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:27.188 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:27.188 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:27.188 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:27.188 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:27.188 Installing symlink pointing to librte_rib.so.24.2 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:27.188 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:27.188 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:27.188 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:27.188 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:27.188 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:27.188 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:27.188 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:27.188 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:27.188 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:27.188 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:27.188 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:27.188 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:27.188 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:27.188 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:27.188 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:27.188 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:27.188 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:27.188 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:27.188 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:27.188 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:27.188 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:27.188 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:27.188 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:27.188 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 
00:01:27.188 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:27.188 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:27.188 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:27.188 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:27.188 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:27.188 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:01:27.189 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:01:27.189 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:01:27.189 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:01:27.189 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:01:27.189 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:01:27.189 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:01:27.189 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:01:27.189 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:01:27.189 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:01:27.189 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:01:27.189 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:01:27.189 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:01:27.189 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:01:27.189 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:01:27.189 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:01:27.189 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:01:27.189 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:01:27.189 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:01:27.189 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:01:27.189 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:01:27.189 19:07:38 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:01:27.189 19:07:38 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.189 00:01:27.189 real 0m26.791s 00:01:27.189 user 8m39.279s 00:01:27.189 sys 1m59.619s 00:01:27.189 19:07:38 
build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:27.189 19:07:38 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:27.189 ************************************ 00:01:27.189 END TEST build_native_dpdk 00:01:27.189 ************************************ 00:01:27.447 19:07:38 -- common/autotest_common.sh@1142 -- $ return 0 00:01:27.447 19:07:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:27.447 19:07:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:27.447 19:07:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:27.447 19:07:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:27.447 19:07:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:27.447 19:07:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:27.447 19:07:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:27.447 19:07:38 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:27.447 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:27.447 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.447 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.706 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:27.965 Using 'verbs' RDMA provider 00:01:41.124 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:51.156 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:51.725 Creating mk/config.mk...done. 00:01:51.725 Creating mk/cc.flags.mk...done. 00:01:51.725 Type 'make' to build. 00:01:51.725 19:08:02 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:51.725 19:08:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:51.725 19:08:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:51.725 19:08:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.725 ************************************ 00:01:51.725 START TEST make 00:01:51.725 ************************************ 00:01:51.725 19:08:02 make -- common/autotest_common.sh@1123 -- $ make -j96 00:01:51.983 make[1]: Nothing to be done for 'all'. 
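The configure step above points SPDK at the freshly installed DPDK tree via --with-dpdk and then reports using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs. A minimal sketch, assuming that same workspace layout, of how the staged libdpdk.pc could be queried with pkg-config; the commands are illustrative only and are not part of the autotest scripts:

  # Hypothetical inspection of the DPDK build consumed by SPDK's configure
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # version of the staged DPDK install
  pkg-config --cflags --libs libdpdk     # compile/link flags a consumer such as SPDK would pick up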
00:01:53.366 The Meson build system 00:01:53.366 Version: 1.3.1 00:01:53.366 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:53.366 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:53.366 Build type: native build 00:01:53.366 Project name: libvfio-user 00:01:53.366 Project version: 0.0.1 00:01:53.366 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:53.366 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:53.366 Host machine cpu family: x86_64 00:01:53.366 Host machine cpu: x86_64 00:01:53.366 Run-time dependency threads found: YES 00:01:53.366 Library dl found: YES 00:01:53.366 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:53.366 Run-time dependency json-c found: YES 0.17 00:01:53.366 Run-time dependency cmocka found: YES 1.1.7 00:01:53.366 Program pytest-3 found: NO 00:01:53.366 Program flake8 found: NO 00:01:53.366 Program misspell-fixer found: NO 00:01:53.366 Program restructuredtext-lint found: NO 00:01:53.366 Program valgrind found: YES (/usr/bin/valgrind) 00:01:53.366 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:53.366 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:53.366 Compiler for C supports arguments -Wwrite-strings: YES 00:01:53.366 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:53.366 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:53.366 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:53.366 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:53.366 Build targets in project: 8 00:01:53.366 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:53.366 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:53.366 00:01:53.366 libvfio-user 0.0.1 00:01:53.366 00:01:53.366 User defined options 00:01:53.366 buildtype : debug 00:01:53.366 default_library: shared 00:01:53.366 libdir : /usr/local/lib 00:01:53.366 00:01:53.366 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:53.624 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:53.624 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:53.624 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:53.624 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:53.624 [4/37] Compiling C object samples/null.p/null.c.o 00:01:53.624 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:53.882 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:53.882 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:53.882 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:53.882 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:53.882 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:53.882 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:53.882 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:53.882 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:53.882 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:53.882 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:53.882 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:53.882 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:53.882 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:53.882 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:53.882 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:53.882 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:53.882 [22/37] Compiling C object samples/server.p/server.c.o 00:01:53.882 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:53.882 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:53.882 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:53.882 [26/37] Compiling C object samples/client.p/client.c.o 00:01:53.882 [27/37] Linking target samples/client 00:01:53.882 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:53.882 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:53.882 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:53.882 [31/37] Linking target test/unit_tests 00:01:54.141 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:54.141 [33/37] Linking target samples/null 00:01:54.141 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:54.141 [35/37] Linking target samples/gpio-pci-idio-16 00:01:54.141 [36/37] Linking target samples/lspci 00:01:54.141 [37/37] Linking target samples/server 00:01:54.141 INFO: autodetecting backend as ninja 00:01:54.141 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
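The numbered [1/37] through [37/37] steps above are ninja compiling and linking the bundled libvfio-user with the options meson reported (buildtype debug, shared default_library). A minimal sketch, assuming a generic source checkout and staging directory, of the equivalent manual meson/ninja sequence; the directory names are placeholders rather than the autotest paths:

  # Hypothetical stand-alone build mirroring the logged libvfio-user steps
  meson setup build-debug /path/to/libvfio-user --buildtype=debug -Ddefault_library=shared
  ninja -C build-debug                                           # compile and link the targets
  DESTDIR=/path/to/stage meson install --quiet -C build-debug    # stage the install, as the log does next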
00:01:54.141 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:54.709 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:54.709 ninja: no work to do. 00:02:02.898 CC lib/log/log.o 00:02:02.898 CC lib/log/log_flags.o 00:02:02.898 CC lib/log/log_deprecated.o 00:02:02.898 CC lib/ut_mock/mock.o 00:02:02.898 CC lib/ut/ut.o 00:02:02.898 LIB libspdk_log.a 00:02:02.898 LIB libspdk_ut.a 00:02:02.898 LIB libspdk_ut_mock.a 00:02:02.898 SO libspdk_ut.so.2.0 00:02:02.898 SO libspdk_log.so.7.0 00:02:02.898 SO libspdk_ut_mock.so.6.0 00:02:02.898 SYMLINK libspdk_ut.so 00:02:02.898 SYMLINK libspdk_log.so 00:02:02.898 SYMLINK libspdk_ut_mock.so 00:02:02.898 CC lib/ioat/ioat.o 00:02:02.898 CC lib/dma/dma.o 00:02:02.898 CXX lib/trace_parser/trace.o 00:02:02.898 CC lib/util/base64.o 00:02:02.898 CC lib/util/bit_array.o 00:02:02.898 CC lib/util/cpuset.o 00:02:02.898 CC lib/util/crc32.o 00:02:02.898 CC lib/util/crc16.o 00:02:02.898 CC lib/util/crc32c.o 00:02:02.898 CC lib/util/crc32_ieee.o 00:02:02.898 CC lib/util/crc64.o 00:02:02.898 CC lib/util/dif.o 00:02:02.898 CC lib/util/fd.o 00:02:02.898 CC lib/util/hexlify.o 00:02:02.898 CC lib/util/file.o 00:02:02.898 CC lib/util/iov.o 00:02:02.898 CC lib/util/math.o 00:02:02.898 CC lib/util/pipe.o 00:02:02.898 CC lib/util/strerror_tls.o 00:02:02.898 CC lib/util/string.o 00:02:02.898 CC lib/util/uuid.o 00:02:02.898 CC lib/util/fd_group.o 00:02:02.898 CC lib/util/xor.o 00:02:02.898 CC lib/util/zipf.o 00:02:03.156 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.156 CC lib/vfio_user/host/vfio_user.o 00:02:03.156 LIB libspdk_dma.a 00:02:03.156 SO libspdk_dma.so.4.0 00:02:03.156 LIB libspdk_ioat.a 00:02:03.156 SYMLINK libspdk_dma.so 00:02:03.156 SO libspdk_ioat.so.7.0 00:02:03.414 SYMLINK libspdk_ioat.so 00:02:03.414 LIB libspdk_vfio_user.a 00:02:03.414 SO libspdk_vfio_user.so.5.0 00:02:03.414 LIB libspdk_util.a 00:02:03.414 SYMLINK libspdk_vfio_user.so 00:02:03.414 SO libspdk_util.so.9.1 00:02:03.673 SYMLINK libspdk_util.so 00:02:03.673 LIB libspdk_trace_parser.a 00:02:03.673 SO libspdk_trace_parser.so.5.0 00:02:03.673 SYMLINK libspdk_trace_parser.so 00:02:03.931 CC lib/rdma_utils/rdma_utils.o 00:02:03.931 CC lib/idxd/idxd_user.o 00:02:03.931 CC lib/idxd/idxd_kernel.o 00:02:03.931 CC lib/idxd/idxd.o 00:02:03.931 CC lib/vmd/vmd.o 00:02:03.931 CC lib/conf/conf.o 00:02:03.931 CC lib/vmd/led.o 00:02:03.931 CC lib/env_dpdk/memory.o 00:02:03.931 CC lib/env_dpdk/env.o 00:02:03.931 CC lib/rdma_provider/common.o 00:02:03.931 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:03.931 CC lib/env_dpdk/pci.o 00:02:03.931 CC lib/env_dpdk/init.o 00:02:03.931 CC lib/env_dpdk/threads.o 00:02:03.931 CC lib/env_dpdk/pci_ioat.o 00:02:03.931 CC lib/env_dpdk/pci_virtio.o 00:02:03.931 CC lib/json/json_parse.o 00:02:03.931 CC lib/env_dpdk/pci_idxd.o 00:02:03.931 CC lib/env_dpdk/pci_vmd.o 00:02:03.931 CC lib/json/json_util.o 00:02:03.931 CC lib/json/json_write.o 00:02:03.931 CC lib/env_dpdk/pci_event.o 00:02:03.931 CC lib/env_dpdk/sigbus_handler.o 00:02:03.931 CC lib/env_dpdk/pci_dpdk.o 00:02:03.931 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:03.931 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:04.190 LIB libspdk_rdma_provider.a 00:02:04.190 LIB libspdk_conf.a 00:02:04.190 LIB libspdk_rdma_utils.a 00:02:04.190 SO libspdk_rdma_provider.so.6.0 00:02:04.190 SO libspdk_conf.so.6.0 00:02:04.190 SO 
libspdk_rdma_utils.so.1.0 00:02:04.190 LIB libspdk_json.a 00:02:04.190 SYMLINK libspdk_rdma_provider.so 00:02:04.190 SYMLINK libspdk_conf.so 00:02:04.190 SYMLINK libspdk_rdma_utils.so 00:02:04.190 SO libspdk_json.so.6.0 00:02:04.190 SYMLINK libspdk_json.so 00:02:04.449 LIB libspdk_idxd.a 00:02:04.449 SO libspdk_idxd.so.12.0 00:02:04.449 LIB libspdk_vmd.a 00:02:04.449 SO libspdk_vmd.so.6.0 00:02:04.449 SYMLINK libspdk_idxd.so 00:02:04.449 SYMLINK libspdk_vmd.so 00:02:04.449 CC lib/jsonrpc/jsonrpc_server.o 00:02:04.449 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:04.449 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:04.449 CC lib/jsonrpc/jsonrpc_client.o 00:02:04.708 LIB libspdk_jsonrpc.a 00:02:04.708 SO libspdk_jsonrpc.so.6.0 00:02:04.967 SYMLINK libspdk_jsonrpc.so 00:02:04.967 LIB libspdk_env_dpdk.a 00:02:04.967 SO libspdk_env_dpdk.so.14.1 00:02:05.227 SYMLINK libspdk_env_dpdk.so 00:02:05.227 CC lib/rpc/rpc.o 00:02:05.227 LIB libspdk_rpc.a 00:02:05.227 SO libspdk_rpc.so.6.0 00:02:05.486 SYMLINK libspdk_rpc.so 00:02:05.769 CC lib/keyring/keyring.o 00:02:05.769 CC lib/keyring/keyring_rpc.o 00:02:05.769 CC lib/notify/notify.o 00:02:05.769 CC lib/notify/notify_rpc.o 00:02:05.769 CC lib/trace/trace.o 00:02:05.769 CC lib/trace/trace_rpc.o 00:02:05.769 CC lib/trace/trace_flags.o 00:02:05.769 LIB libspdk_notify.a 00:02:06.029 LIB libspdk_keyring.a 00:02:06.029 SO libspdk_notify.so.6.0 00:02:06.029 SO libspdk_keyring.so.1.0 00:02:06.029 LIB libspdk_trace.a 00:02:06.029 SYMLINK libspdk_notify.so 00:02:06.029 SO libspdk_trace.so.10.0 00:02:06.029 SYMLINK libspdk_keyring.so 00:02:06.029 SYMLINK libspdk_trace.so 00:02:06.288 CC lib/sock/sock.o 00:02:06.288 CC lib/sock/sock_rpc.o 00:02:06.288 CC lib/thread/thread.o 00:02:06.288 CC lib/thread/iobuf.o 00:02:06.547 LIB libspdk_sock.a 00:02:06.547 SO libspdk_sock.so.10.0 00:02:06.806 SYMLINK libspdk_sock.so 00:02:07.066 CC lib/nvme/nvme_fabric.o 00:02:07.066 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.066 CC lib/nvme/nvme_ns_cmd.o 00:02:07.066 CC lib/nvme/nvme_ctrlr.o 00:02:07.066 CC lib/nvme/nvme_ns.o 00:02:07.066 CC lib/nvme/nvme_pcie_common.o 00:02:07.066 CC lib/nvme/nvme_pcie.o 00:02:07.066 CC lib/nvme/nvme_qpair.o 00:02:07.066 CC lib/nvme/nvme.o 00:02:07.066 CC lib/nvme/nvme_transport.o 00:02:07.066 CC lib/nvme/nvme_discovery.o 00:02:07.066 CC lib/nvme/nvme_quirks.o 00:02:07.067 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.067 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.067 CC lib/nvme/nvme_tcp.o 00:02:07.067 CC lib/nvme/nvme_opal.o 00:02:07.067 CC lib/nvme/nvme_io_msg.o 00:02:07.067 CC lib/nvme/nvme_poll_group.o 00:02:07.067 CC lib/nvme/nvme_zns.o 00:02:07.067 CC lib/nvme/nvme_stubs.o 00:02:07.067 CC lib/nvme/nvme_auth.o 00:02:07.067 CC lib/nvme/nvme_rdma.o 00:02:07.067 CC lib/nvme/nvme_cuse.o 00:02:07.067 CC lib/nvme/nvme_vfio_user.o 00:02:07.325 LIB libspdk_thread.a 00:02:07.325 SO libspdk_thread.so.10.1 00:02:07.584 SYMLINK libspdk_thread.so 00:02:07.843 CC lib/virtio/virtio.o 00:02:07.843 CC lib/virtio/virtio_vhost_user.o 00:02:07.843 CC lib/virtio/virtio_vfio_user.o 00:02:07.843 CC lib/virtio/virtio_pci.o 00:02:07.843 CC lib/vfu_tgt/tgt_endpoint.o 00:02:07.843 CC lib/vfu_tgt/tgt_rpc.o 00:02:07.843 CC lib/blob/blobstore.o 00:02:07.843 CC lib/blob/zeroes.o 00:02:07.843 CC lib/blob/request.o 00:02:07.843 CC lib/blob/blob_bs_dev.o 00:02:07.843 CC lib/init/json_config.o 00:02:07.843 CC lib/init/subsystem.o 00:02:07.843 CC lib/init/subsystem_rpc.o 00:02:07.843 CC lib/accel/accel.o 00:02:07.843 CC lib/init/rpc.o 00:02:07.843 CC lib/accel/accel_rpc.o 00:02:07.843 CC 
lib/accel/accel_sw.o 00:02:08.102 LIB libspdk_init.a 00:02:08.102 LIB libspdk_virtio.a 00:02:08.102 LIB libspdk_vfu_tgt.a 00:02:08.102 SO libspdk_init.so.5.0 00:02:08.102 SO libspdk_virtio.so.7.0 00:02:08.102 SO libspdk_vfu_tgt.so.3.0 00:02:08.102 SYMLINK libspdk_init.so 00:02:08.102 SYMLINK libspdk_virtio.so 00:02:08.102 SYMLINK libspdk_vfu_tgt.so 00:02:08.361 CC lib/event/app.o 00:02:08.361 CC lib/event/reactor.o 00:02:08.361 CC lib/event/log_rpc.o 00:02:08.361 CC lib/event/app_rpc.o 00:02:08.361 CC lib/event/scheduler_static.o 00:02:08.621 LIB libspdk_accel.a 00:02:08.621 SO libspdk_accel.so.15.1 00:02:08.621 LIB libspdk_nvme.a 00:02:08.621 SYMLINK libspdk_accel.so 00:02:08.621 SO libspdk_nvme.so.13.1 00:02:08.621 LIB libspdk_event.a 00:02:08.621 SO libspdk_event.so.14.0 00:02:08.881 SYMLINK libspdk_event.so 00:02:08.881 CC lib/bdev/bdev.o 00:02:08.881 CC lib/bdev/bdev_zone.o 00:02:08.881 CC lib/bdev/bdev_rpc.o 00:02:08.881 CC lib/bdev/scsi_nvme.o 00:02:08.881 CC lib/bdev/part.o 00:02:08.881 SYMLINK libspdk_nvme.so 00:02:09.819 LIB libspdk_blob.a 00:02:09.819 SO libspdk_blob.so.11.0 00:02:10.078 SYMLINK libspdk_blob.so 00:02:10.338 CC lib/blobfs/blobfs.o 00:02:10.338 CC lib/blobfs/tree.o 00:02:10.338 CC lib/lvol/lvol.o 00:02:10.597 LIB libspdk_bdev.a 00:02:10.597 SO libspdk_bdev.so.15.1 00:02:10.856 SYMLINK libspdk_bdev.so 00:02:10.856 LIB libspdk_blobfs.a 00:02:10.856 SO libspdk_blobfs.so.10.0 00:02:10.856 LIB libspdk_lvol.a 00:02:10.856 SYMLINK libspdk_blobfs.so 00:02:10.856 SO libspdk_lvol.so.10.0 00:02:11.117 SYMLINK libspdk_lvol.so 00:02:11.117 CC lib/ftl/ftl_core.o 00:02:11.117 CC lib/ftl/ftl_layout.o 00:02:11.117 CC lib/ftl/ftl_init.o 00:02:11.117 CC lib/ftl/ftl_io.o 00:02:11.117 CC lib/ftl/ftl_debug.o 00:02:11.117 CC lib/ftl/ftl_l2p.o 00:02:11.117 CC lib/ftl/ftl_l2p_flat.o 00:02:11.117 CC lib/ftl/ftl_nv_cache.o 00:02:11.117 CC lib/ftl/ftl_sb.o 00:02:11.117 CC lib/ftl/ftl_band.o 00:02:11.117 CC lib/ftl/ftl_band_ops.o 00:02:11.117 CC lib/ftl/ftl_writer.o 00:02:11.117 CC lib/ftl/ftl_rq.o 00:02:11.117 CC lib/ftl/ftl_reloc.o 00:02:11.117 CC lib/ftl/ftl_l2p_cache.o 00:02:11.117 CC lib/ftl/ftl_p2l.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:11.117 CC lib/nbd/nbd.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:11.117 CC lib/nbd/nbd_rpc.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:11.117 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:11.117 CC lib/nvmf/ctrlr_discovery.o 00:02:11.117 CC lib/nvmf/ctrlr.o 00:02:11.117 CC lib/ftl/utils/ftl_conf.o 00:02:11.117 CC lib/ftl/utils/ftl_md.o 00:02:11.117 CC lib/nvmf/subsystem.o 00:02:11.117 CC lib/nvmf/ctrlr_bdev.o 00:02:11.117 CC lib/ftl/utils/ftl_mempool.o 00:02:11.117 CC lib/nvmf/nvmf.o 00:02:11.117 CC lib/ftl/utils/ftl_bitmap.o 00:02:11.117 CC lib/nvmf/nvmf_rpc.o 00:02:11.117 CC lib/scsi/dev.o 00:02:11.117 CC lib/scsi/lun.o 00:02:11.117 CC lib/ftl/utils/ftl_property.o 00:02:11.117 CC lib/scsi/scsi.o 00:02:11.117 CC lib/scsi/port.o 00:02:11.117 CC lib/nvmf/transport.o 00:02:11.117 CC lib/ublk/ublk.o 00:02:11.117 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:11.117 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:11.117 CC 
lib/nvmf/tcp.o 00:02:11.117 CC lib/ublk/ublk_rpc.o 00:02:11.117 CC lib/scsi/scsi_pr.o 00:02:11.117 CC lib/scsi/scsi_bdev.o 00:02:11.117 CC lib/nvmf/stubs.o 00:02:11.117 CC lib/scsi/scsi_rpc.o 00:02:11.117 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:11.117 CC lib/nvmf/mdns_server.o 00:02:11.117 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:11.117 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:11.117 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:11.117 CC lib/nvmf/vfio_user.o 00:02:11.117 CC lib/nvmf/rdma.o 00:02:11.117 CC lib/scsi/task.o 00:02:11.117 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:11.117 CC lib/nvmf/auth.o 00:02:11.117 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:11.117 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:11.117 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:11.117 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:11.117 CC lib/ftl/base/ftl_base_dev.o 00:02:11.117 CC lib/ftl/base/ftl_base_bdev.o 00:02:11.117 CC lib/ftl/ftl_trace.o 00:02:11.684 LIB libspdk_nbd.a 00:02:11.684 SO libspdk_nbd.so.7.0 00:02:11.684 LIB libspdk_scsi.a 00:02:11.684 SO libspdk_scsi.so.9.0 00:02:11.684 SYMLINK libspdk_nbd.so 00:02:11.684 LIB libspdk_ublk.a 00:02:11.942 SYMLINK libspdk_scsi.so 00:02:11.942 SO libspdk_ublk.so.3.0 00:02:11.942 SYMLINK libspdk_ublk.so 00:02:11.943 LIB libspdk_ftl.a 00:02:12.202 CC lib/iscsi/iscsi.o 00:02:12.202 CC lib/iscsi/conn.o 00:02:12.202 CC lib/iscsi/init_grp.o 00:02:12.202 CC lib/iscsi/md5.o 00:02:12.202 CC lib/iscsi/param.o 00:02:12.202 CC lib/iscsi/portal_grp.o 00:02:12.202 CC lib/iscsi/tgt_node.o 00:02:12.202 CC lib/iscsi/iscsi_subsystem.o 00:02:12.202 CC lib/iscsi/iscsi_rpc.o 00:02:12.202 CC lib/iscsi/task.o 00:02:12.202 CC lib/vhost/vhost.o 00:02:12.202 CC lib/vhost/vhost_rpc.o 00:02:12.202 CC lib/vhost/vhost_blk.o 00:02:12.202 CC lib/vhost/vhost_scsi.o 00:02:12.202 CC lib/vhost/rte_vhost_user.o 00:02:12.202 SO libspdk_ftl.so.9.0 00:02:12.460 SYMLINK libspdk_ftl.so 00:02:13.027 LIB libspdk_vhost.a 00:02:13.027 LIB libspdk_nvmf.a 00:02:13.027 SO libspdk_vhost.so.8.0 00:02:13.027 SO libspdk_nvmf.so.19.0 00:02:13.027 SYMLINK libspdk_vhost.so 00:02:13.027 LIB libspdk_iscsi.a 00:02:13.027 SYMLINK libspdk_nvmf.so 00:02:13.027 SO libspdk_iscsi.so.8.0 00:02:13.287 SYMLINK libspdk_iscsi.so 00:02:13.851 CC module/env_dpdk/env_dpdk_rpc.o 00:02:13.851 CC module/vfu_device/vfu_virtio.o 00:02:13.851 CC module/vfu_device/vfu_virtio_blk.o 00:02:13.851 CC module/vfu_device/vfu_virtio_scsi.o 00:02:13.851 CC module/vfu_device/vfu_virtio_rpc.o 00:02:13.851 CC module/accel/dsa/accel_dsa.o 00:02:13.851 CC module/accel/dsa/accel_dsa_rpc.o 00:02:13.851 CC module/sock/posix/posix.o 00:02:13.851 LIB libspdk_env_dpdk_rpc.a 00:02:13.851 CC module/keyring/file/keyring_rpc.o 00:02:13.851 CC module/keyring/file/keyring.o 00:02:13.851 CC module/accel/error/accel_error.o 00:02:13.851 CC module/accel/error/accel_error_rpc.o 00:02:13.851 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:13.851 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:13.851 CC module/keyring/linux/keyring.o 00:02:13.851 CC module/keyring/linux/keyring_rpc.o 00:02:13.851 CC module/blob/bdev/blob_bdev.o 00:02:13.851 CC module/accel/iaa/accel_iaa.o 00:02:13.851 CC module/accel/ioat/accel_ioat_rpc.o 00:02:13.851 CC module/accel/ioat/accel_ioat.o 00:02:13.851 CC module/accel/iaa/accel_iaa_rpc.o 00:02:13.851 CC module/scheduler/gscheduler/gscheduler.o 00:02:13.851 SO libspdk_env_dpdk_rpc.so.6.0 00:02:13.851 SYMLINK libspdk_env_dpdk_rpc.so 00:02:14.109 LIB libspdk_keyring_file.a 00:02:14.109 LIB libspdk_scheduler_gscheduler.a 00:02:14.109 LIB 
libspdk_keyring_linux.a 00:02:14.109 LIB libspdk_accel_error.a 00:02:14.109 SO libspdk_keyring_file.so.1.0 00:02:14.109 LIB libspdk_scheduler_dpdk_governor.a 00:02:14.109 SO libspdk_keyring_linux.so.1.0 00:02:14.109 SO libspdk_accel_error.so.2.0 00:02:14.109 SO libspdk_scheduler_gscheduler.so.4.0 00:02:14.109 LIB libspdk_scheduler_dynamic.a 00:02:14.109 LIB libspdk_accel_ioat.a 00:02:14.109 LIB libspdk_accel_dsa.a 00:02:14.109 LIB libspdk_accel_iaa.a 00:02:14.109 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:14.109 SYMLINK libspdk_keyring_file.so 00:02:14.109 SO libspdk_scheduler_dynamic.so.4.0 00:02:14.109 SO libspdk_accel_iaa.so.3.0 00:02:14.109 SYMLINK libspdk_accel_error.so 00:02:14.109 SO libspdk_accel_ioat.so.6.0 00:02:14.109 SO libspdk_accel_dsa.so.5.0 00:02:14.109 SYMLINK libspdk_keyring_linux.so 00:02:14.109 SYMLINK libspdk_scheduler_gscheduler.so 00:02:14.109 LIB libspdk_blob_bdev.a 00:02:14.109 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:14.109 SO libspdk_blob_bdev.so.11.0 00:02:14.109 SYMLINK libspdk_scheduler_dynamic.so 00:02:14.109 SYMLINK libspdk_accel_iaa.so 00:02:14.109 SYMLINK libspdk_accel_dsa.so 00:02:14.109 SYMLINK libspdk_accel_ioat.so 00:02:14.109 SYMLINK libspdk_blob_bdev.so 00:02:14.109 LIB libspdk_vfu_device.a 00:02:14.368 SO libspdk_vfu_device.so.3.0 00:02:14.368 SYMLINK libspdk_vfu_device.so 00:02:14.368 LIB libspdk_sock_posix.a 00:02:14.368 SO libspdk_sock_posix.so.6.0 00:02:14.627 SYMLINK libspdk_sock_posix.so 00:02:14.627 CC module/blobfs/bdev/blobfs_bdev.o 00:02:14.627 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:14.627 CC module/bdev/gpt/gpt.o 00:02:14.627 CC module/bdev/gpt/vbdev_gpt.o 00:02:14.627 CC module/bdev/error/vbdev_error.o 00:02:14.627 CC module/bdev/error/vbdev_error_rpc.o 00:02:14.627 CC module/bdev/nvme/bdev_nvme.o 00:02:14.627 CC module/bdev/nvme/nvme_rpc.o 00:02:14.627 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:14.627 CC module/bdev/nvme/bdev_mdns_client.o 00:02:14.627 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:14.627 CC module/bdev/nvme/vbdev_opal.o 00:02:14.627 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:14.627 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:14.627 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:14.627 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:14.627 CC module/bdev/lvol/vbdev_lvol.o 00:02:14.627 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:14.627 CC module/bdev/malloc/bdev_malloc.o 00:02:14.627 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:14.627 CC module/bdev/null/bdev_null_rpc.o 00:02:14.627 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:14.627 CC module/bdev/null/bdev_null.o 00:02:14.627 CC module/bdev/delay/vbdev_delay.o 00:02:14.627 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:14.627 CC module/bdev/ftl/bdev_ftl.o 00:02:14.627 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:14.627 CC module/bdev/raid/bdev_raid.o 00:02:14.627 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:14.627 CC module/bdev/passthru/vbdev_passthru.o 00:02:14.627 CC module/bdev/raid/bdev_raid_rpc.o 00:02:14.627 CC module/bdev/raid/bdev_raid_sb.o 00:02:14.627 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:14.627 CC module/bdev/raid/raid0.o 00:02:14.627 CC module/bdev/raid/raid1.o 00:02:14.627 CC module/bdev/raid/concat.o 00:02:14.628 CC module/bdev/aio/bdev_aio.o 00:02:14.628 CC module/bdev/aio/bdev_aio_rpc.o 00:02:14.628 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:14.628 CC module/bdev/iscsi/bdev_iscsi.o 00:02:14.628 CC module/bdev/split/vbdev_split_rpc.o 00:02:14.628 CC module/bdev/split/vbdev_split.o 00:02:14.887 LIB 
libspdk_blobfs_bdev.a 00:02:14.887 SO libspdk_blobfs_bdev.so.6.0 00:02:14.887 LIB libspdk_bdev_error.a 00:02:14.887 LIB libspdk_bdev_split.a 00:02:14.887 SYMLINK libspdk_blobfs_bdev.so 00:02:14.887 SO libspdk_bdev_error.so.6.0 00:02:14.887 SO libspdk_bdev_split.so.6.0 00:02:14.887 LIB libspdk_bdev_null.a 00:02:14.887 LIB libspdk_bdev_gpt.a 00:02:14.887 LIB libspdk_bdev_ftl.a 00:02:15.146 SO libspdk_bdev_null.so.6.0 00:02:15.146 LIB libspdk_bdev_passthru.a 00:02:15.146 SO libspdk_bdev_gpt.so.6.0 00:02:15.146 SYMLINK libspdk_bdev_error.so 00:02:15.146 LIB libspdk_bdev_aio.a 00:02:15.146 LIB libspdk_bdev_zone_block.a 00:02:15.146 SO libspdk_bdev_ftl.so.6.0 00:02:15.146 SYMLINK libspdk_bdev_split.so 00:02:15.146 LIB libspdk_bdev_malloc.a 00:02:15.146 LIB libspdk_bdev_iscsi.a 00:02:15.146 SO libspdk_bdev_passthru.so.6.0 00:02:15.146 SO libspdk_bdev_zone_block.so.6.0 00:02:15.146 LIB libspdk_bdev_delay.a 00:02:15.146 SYMLINK libspdk_bdev_null.so 00:02:15.146 SO libspdk_bdev_aio.so.6.0 00:02:15.146 SYMLINK libspdk_bdev_gpt.so 00:02:15.146 SO libspdk_bdev_malloc.so.6.0 00:02:15.146 SO libspdk_bdev_delay.so.6.0 00:02:15.146 SO libspdk_bdev_iscsi.so.6.0 00:02:15.146 SYMLINK libspdk_bdev_ftl.so 00:02:15.146 SYMLINK libspdk_bdev_zone_block.so 00:02:15.146 SYMLINK libspdk_bdev_passthru.so 00:02:15.146 SYMLINK libspdk_bdev_delay.so 00:02:15.146 SYMLINK libspdk_bdev_aio.so 00:02:15.146 SYMLINK libspdk_bdev_iscsi.so 00:02:15.146 SYMLINK libspdk_bdev_malloc.so 00:02:15.146 LIB libspdk_bdev_virtio.a 00:02:15.146 LIB libspdk_bdev_lvol.a 00:02:15.146 SO libspdk_bdev_virtio.so.6.0 00:02:15.146 SO libspdk_bdev_lvol.so.6.0 00:02:15.146 SYMLINK libspdk_bdev_virtio.so 00:02:15.404 SYMLINK libspdk_bdev_lvol.so 00:02:15.404 LIB libspdk_bdev_raid.a 00:02:15.404 SO libspdk_bdev_raid.so.6.0 00:02:15.662 SYMLINK libspdk_bdev_raid.so 00:02:16.229 LIB libspdk_bdev_nvme.a 00:02:16.230 SO libspdk_bdev_nvme.so.7.0 00:02:16.488 SYMLINK libspdk_bdev_nvme.so 00:02:17.055 CC module/event/subsystems/scheduler/scheduler.o 00:02:17.055 CC module/event/subsystems/vmd/vmd.o 00:02:17.055 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:17.055 CC module/event/subsystems/keyring/keyring.o 00:02:17.055 CC module/event/subsystems/sock/sock.o 00:02:17.055 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:17.055 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:17.055 CC module/event/subsystems/iobuf/iobuf.o 00:02:17.055 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:17.055 LIB libspdk_event_scheduler.a 00:02:17.055 LIB libspdk_event_keyring.a 00:02:17.055 LIB libspdk_event_sock.a 00:02:17.055 SO libspdk_event_scheduler.so.4.0 00:02:17.055 LIB libspdk_event_vmd.a 00:02:17.055 LIB libspdk_event_vhost_blk.a 00:02:17.314 LIB libspdk_event_vfu_tgt.a 00:02:17.314 SO libspdk_event_sock.so.5.0 00:02:17.314 SO libspdk_event_keyring.so.1.0 00:02:17.314 SO libspdk_event_vmd.so.6.0 00:02:17.314 LIB libspdk_event_iobuf.a 00:02:17.314 SO libspdk_event_vhost_blk.so.3.0 00:02:17.314 SYMLINK libspdk_event_scheduler.so 00:02:17.314 SO libspdk_event_vfu_tgt.so.3.0 00:02:17.314 SYMLINK libspdk_event_sock.so 00:02:17.314 SO libspdk_event_iobuf.so.3.0 00:02:17.314 SYMLINK libspdk_event_keyring.so 00:02:17.314 SYMLINK libspdk_event_vhost_blk.so 00:02:17.314 SYMLINK libspdk_event_vmd.so 00:02:17.314 SYMLINK libspdk_event_vfu_tgt.so 00:02:17.314 SYMLINK libspdk_event_iobuf.so 00:02:17.607 CC module/event/subsystems/accel/accel.o 00:02:17.866 LIB libspdk_event_accel.a 00:02:17.866 SO libspdk_event_accel.so.6.0 00:02:17.866 SYMLINK libspdk_event_accel.so 
00:02:18.124 CC module/event/subsystems/bdev/bdev.o 00:02:18.383 LIB libspdk_event_bdev.a 00:02:18.383 SO libspdk_event_bdev.so.6.0 00:02:18.383 SYMLINK libspdk_event_bdev.so 00:02:18.642 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:18.642 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:18.642 CC module/event/subsystems/ublk/ublk.o 00:02:18.642 CC module/event/subsystems/scsi/scsi.o 00:02:18.642 CC module/event/subsystems/nbd/nbd.o 00:02:18.900 LIB libspdk_event_nvmf.a 00:02:18.900 LIB libspdk_event_ublk.a 00:02:18.900 LIB libspdk_event_nbd.a 00:02:18.901 SO libspdk_event_nvmf.so.6.0 00:02:18.901 LIB libspdk_event_scsi.a 00:02:18.901 SO libspdk_event_ublk.so.3.0 00:02:18.901 SO libspdk_event_nbd.so.6.0 00:02:18.901 SO libspdk_event_scsi.so.6.0 00:02:18.901 SYMLINK libspdk_event_ublk.so 00:02:18.901 SYMLINK libspdk_event_nvmf.so 00:02:18.901 SYMLINK libspdk_event_nbd.so 00:02:18.901 SYMLINK libspdk_event_scsi.so 00:02:19.158 CC module/event/subsystems/iscsi/iscsi.o 00:02:19.158 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:19.416 LIB libspdk_event_iscsi.a 00:02:19.416 LIB libspdk_event_vhost_scsi.a 00:02:19.416 SO libspdk_event_iscsi.so.6.0 00:02:19.416 SO libspdk_event_vhost_scsi.so.3.0 00:02:19.416 SYMLINK libspdk_event_iscsi.so 00:02:19.416 SYMLINK libspdk_event_vhost_scsi.so 00:02:19.673 SO libspdk.so.6.0 00:02:19.673 SYMLINK libspdk.so 00:02:19.932 CC app/spdk_nvme_perf/perf.o 00:02:19.932 CXX app/trace/trace.o 00:02:19.932 CC app/spdk_top/spdk_top.o 00:02:19.932 CC app/trace_record/trace_record.o 00:02:19.932 CC app/spdk_nvme_discover/discovery_aer.o 00:02:19.932 TEST_HEADER include/spdk/accel.h 00:02:19.932 CC app/spdk_nvme_identify/identify.o 00:02:19.932 TEST_HEADER include/spdk/accel_module.h 00:02:19.932 TEST_HEADER include/spdk/assert.h 00:02:19.932 TEST_HEADER include/spdk/bdev.h 00:02:19.932 TEST_HEADER include/spdk/barrier.h 00:02:19.932 TEST_HEADER include/spdk/base64.h 00:02:19.932 TEST_HEADER include/spdk/bdev_zone.h 00:02:19.932 TEST_HEADER include/spdk/bit_array.h 00:02:19.932 TEST_HEADER include/spdk/bdev_module.h 00:02:19.932 TEST_HEADER include/spdk/bit_pool.h 00:02:19.932 TEST_HEADER include/spdk/blob_bdev.h 00:02:19.932 TEST_HEADER include/spdk/blobfs.h 00:02:19.932 CC test/rpc_client/rpc_client_test.o 00:02:19.932 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:19.932 CC app/spdk_lspci/spdk_lspci.o 00:02:19.932 TEST_HEADER include/spdk/conf.h 00:02:19.932 TEST_HEADER include/spdk/blob.h 00:02:19.932 TEST_HEADER include/spdk/config.h 00:02:19.932 TEST_HEADER include/spdk/cpuset.h 00:02:19.932 TEST_HEADER include/spdk/crc32.h 00:02:19.932 TEST_HEADER include/spdk/crc16.h 00:02:19.932 TEST_HEADER include/spdk/dif.h 00:02:19.932 TEST_HEADER include/spdk/crc64.h 00:02:19.932 TEST_HEADER include/spdk/dma.h 00:02:19.932 TEST_HEADER include/spdk/endian.h 00:02:19.932 TEST_HEADER include/spdk/env_dpdk.h 00:02:19.932 TEST_HEADER include/spdk/env.h 00:02:19.932 TEST_HEADER include/spdk/event.h 00:02:19.932 TEST_HEADER include/spdk/fd_group.h 00:02:19.932 TEST_HEADER include/spdk/file.h 00:02:19.932 TEST_HEADER include/spdk/ftl.h 00:02:19.932 TEST_HEADER include/spdk/fd.h 00:02:19.932 TEST_HEADER include/spdk/hexlify.h 00:02:19.932 TEST_HEADER include/spdk/gpt_spec.h 00:02:19.932 TEST_HEADER include/spdk/histogram_data.h 00:02:19.932 TEST_HEADER include/spdk/idxd.h 00:02:19.932 TEST_HEADER include/spdk/idxd_spec.h 00:02:19.932 TEST_HEADER include/spdk/init.h 00:02:19.932 TEST_HEADER include/spdk/ioat.h 00:02:19.932 CC examples/interrupt_tgt/interrupt_tgt.o 
00:02:19.932 TEST_HEADER include/spdk/ioat_spec.h 00:02:19.932 TEST_HEADER include/spdk/iscsi_spec.h 00:02:19.932 TEST_HEADER include/spdk/json.h 00:02:19.932 TEST_HEADER include/spdk/jsonrpc.h 00:02:19.932 TEST_HEADER include/spdk/keyring.h 00:02:19.932 TEST_HEADER include/spdk/keyring_module.h 00:02:19.932 TEST_HEADER include/spdk/likely.h 00:02:19.932 TEST_HEADER include/spdk/log.h 00:02:19.932 CC app/iscsi_tgt/iscsi_tgt.o 00:02:19.932 TEST_HEADER include/spdk/lvol.h 00:02:19.932 TEST_HEADER include/spdk/memory.h 00:02:19.932 TEST_HEADER include/spdk/nbd.h 00:02:19.932 TEST_HEADER include/spdk/mmio.h 00:02:19.932 TEST_HEADER include/spdk/notify.h 00:02:19.932 TEST_HEADER include/spdk/nvme_intel.h 00:02:19.932 TEST_HEADER include/spdk/nvme.h 00:02:19.932 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:19.932 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:19.932 TEST_HEADER include/spdk/nvme_spec.h 00:02:19.932 TEST_HEADER include/spdk/nvme_zns.h 00:02:19.932 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:19.932 CC app/nvmf_tgt/nvmf_main.o 00:02:19.932 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:19.932 TEST_HEADER include/spdk/nvmf.h 00:02:19.932 TEST_HEADER include/spdk/nvmf_spec.h 00:02:19.932 TEST_HEADER include/spdk/nvmf_transport.h 00:02:19.932 TEST_HEADER include/spdk/opal.h 00:02:19.932 TEST_HEADER include/spdk/pci_ids.h 00:02:19.932 TEST_HEADER include/spdk/pipe.h 00:02:19.932 TEST_HEADER include/spdk/opal_spec.h 00:02:19.932 TEST_HEADER include/spdk/queue.h 00:02:19.932 TEST_HEADER include/spdk/reduce.h 00:02:19.932 TEST_HEADER include/spdk/rpc.h 00:02:19.932 TEST_HEADER include/spdk/scsi.h 00:02:19.932 TEST_HEADER include/spdk/scheduler.h 00:02:19.932 TEST_HEADER include/spdk/sock.h 00:02:19.932 TEST_HEADER include/spdk/scsi_spec.h 00:02:19.932 CC app/spdk_dd/spdk_dd.o 00:02:19.932 TEST_HEADER include/spdk/stdinc.h 00:02:19.932 TEST_HEADER include/spdk/trace_parser.h 00:02:19.932 TEST_HEADER include/spdk/string.h 00:02:19.932 TEST_HEADER include/spdk/thread.h 00:02:19.932 TEST_HEADER include/spdk/trace.h 00:02:19.932 TEST_HEADER include/spdk/tree.h 00:02:19.932 TEST_HEADER include/spdk/ublk.h 00:02:19.932 TEST_HEADER include/spdk/util.h 00:02:19.932 TEST_HEADER include/spdk/uuid.h 00:02:19.932 TEST_HEADER include/spdk/version.h 00:02:19.932 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:19.932 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:19.932 TEST_HEADER include/spdk/vhost.h 00:02:19.932 TEST_HEADER include/spdk/vmd.h 00:02:19.932 TEST_HEADER include/spdk/xor.h 00:02:19.932 TEST_HEADER include/spdk/zipf.h 00:02:19.932 CC app/spdk_tgt/spdk_tgt.o 00:02:19.932 CXX test/cpp_headers/accel.o 00:02:19.932 CXX test/cpp_headers/accel_module.o 00:02:19.932 CXX test/cpp_headers/assert.o 00:02:19.932 CXX test/cpp_headers/barrier.o 00:02:19.932 CXX test/cpp_headers/base64.o 00:02:19.932 CXX test/cpp_headers/bdev.o 00:02:19.932 CXX test/cpp_headers/bdev_zone.o 00:02:19.932 CXX test/cpp_headers/bdev_module.o 00:02:19.932 CXX test/cpp_headers/bit_array.o 00:02:19.932 CXX test/cpp_headers/bit_pool.o 00:02:19.932 CXX test/cpp_headers/blobfs_bdev.o 00:02:19.932 CXX test/cpp_headers/blob_bdev.o 00:02:19.932 CXX test/cpp_headers/blobfs.o 00:02:19.932 CXX test/cpp_headers/blob.o 00:02:19.932 CXX test/cpp_headers/conf.o 00:02:19.932 CXX test/cpp_headers/cpuset.o 00:02:19.932 CXX test/cpp_headers/config.o 00:02:19.932 CXX test/cpp_headers/crc32.o 00:02:19.932 CXX test/cpp_headers/crc16.o 00:02:19.932 CXX test/cpp_headers/crc64.o 00:02:19.932 CXX test/cpp_headers/dif.o 00:02:19.932 CXX 
test/cpp_headers/dma.o 00:02:19.932 CXX test/cpp_headers/endian.o 00:02:19.932 CXX test/cpp_headers/env_dpdk.o 00:02:19.932 CXX test/cpp_headers/env.o 00:02:19.932 CXX test/cpp_headers/event.o 00:02:19.932 CXX test/cpp_headers/fd_group.o 00:02:19.932 CXX test/cpp_headers/fd.o 00:02:19.932 CXX test/cpp_headers/file.o 00:02:19.932 CC examples/ioat/perf/perf.o 00:02:19.932 CXX test/cpp_headers/ftl.o 00:02:19.932 CXX test/cpp_headers/gpt_spec.o 00:02:19.932 CXX test/cpp_headers/hexlify.o 00:02:19.932 CXX test/cpp_headers/histogram_data.o 00:02:19.932 CXX test/cpp_headers/idxd_spec.o 00:02:19.932 CXX test/cpp_headers/init.o 00:02:19.932 CXX test/cpp_headers/idxd.o 00:02:20.200 CXX test/cpp_headers/ioat.o 00:02:20.200 CXX test/cpp_headers/iscsi_spec.o 00:02:20.200 CXX test/cpp_headers/ioat_spec.o 00:02:20.200 CXX test/cpp_headers/json.o 00:02:20.200 CXX test/cpp_headers/jsonrpc.o 00:02:20.200 CXX test/cpp_headers/keyring.o 00:02:20.200 CXX test/cpp_headers/keyring_module.o 00:02:20.200 CXX test/cpp_headers/likely.o 00:02:20.200 CXX test/cpp_headers/log.o 00:02:20.200 CXX test/cpp_headers/memory.o 00:02:20.200 CXX test/cpp_headers/lvol.o 00:02:20.200 CXX test/cpp_headers/nbd.o 00:02:20.200 CXX test/cpp_headers/mmio.o 00:02:20.200 CC examples/util/zipf/zipf.o 00:02:20.200 CXX test/cpp_headers/notify.o 00:02:20.200 CXX test/cpp_headers/nvme_intel.o 00:02:20.200 CXX test/cpp_headers/nvme_ocssd.o 00:02:20.200 CXX test/cpp_headers/nvme.o 00:02:20.200 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:20.200 CXX test/cpp_headers/nvme_spec.o 00:02:20.200 CXX test/cpp_headers/nvme_zns.o 00:02:20.200 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:20.200 CXX test/cpp_headers/nvmf_cmd.o 00:02:20.200 CXX test/cpp_headers/nvmf.o 00:02:20.200 CXX test/cpp_headers/nvmf_spec.o 00:02:20.200 CXX test/cpp_headers/nvmf_transport.o 00:02:20.200 CXX test/cpp_headers/opal_spec.o 00:02:20.200 CXX test/cpp_headers/opal.o 00:02:20.200 CXX test/cpp_headers/pci_ids.o 00:02:20.200 CXX test/cpp_headers/pipe.o 00:02:20.200 CXX test/cpp_headers/queue.o 00:02:20.200 CXX test/cpp_headers/reduce.o 00:02:20.200 CC examples/ioat/verify/verify.o 00:02:20.200 CC test/app/jsoncat/jsoncat.o 00:02:20.200 CC app/fio/nvme/fio_plugin.o 00:02:20.200 CC test/env/vtophys/vtophys.o 00:02:20.200 CC test/app/histogram_perf/histogram_perf.o 00:02:20.200 CXX test/cpp_headers/rpc.o 00:02:20.200 CC test/env/pci/pci_ut.o 00:02:20.200 CC test/env/memory/memory_ut.o 00:02:20.200 CC test/thread/poller_perf/poller_perf.o 00:02:20.200 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:20.200 CC test/app/stub/stub.o 00:02:20.200 CC test/app/bdev_svc/bdev_svc.o 00:02:20.200 CC app/fio/bdev/fio_plugin.o 00:02:20.200 CC test/dma/test_dma/test_dma.o 00:02:20.200 CXX test/cpp_headers/scheduler.o 00:02:20.200 LINK spdk_lspci 00:02:20.467 LINK rpc_client_test 00:02:20.467 LINK interrupt_tgt 00:02:20.467 LINK spdk_trace_record 00:02:20.467 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:20.467 LINK spdk_nvme_discover 00:02:20.467 CC test/env/mem_callbacks/mem_callbacks.o 00:02:20.729 LINK jsoncat 00:02:20.729 LINK nvmf_tgt 00:02:20.729 LINK vtophys 00:02:20.729 LINK ioat_perf 00:02:20.729 LINK zipf 00:02:20.729 LINK histogram_perf 00:02:20.729 CXX test/cpp_headers/scsi.o 00:02:20.729 CXX test/cpp_headers/scsi_spec.o 00:02:20.729 CXX test/cpp_headers/sock.o 00:02:20.729 CXX test/cpp_headers/stdinc.o 00:02:20.729 CXX test/cpp_headers/string.o 00:02:20.729 CXX test/cpp_headers/thread.o 00:02:20.729 CXX test/cpp_headers/trace.o 00:02:20.729 CXX test/cpp_headers/trace_parser.o 
00:02:20.729 CXX test/cpp_headers/tree.o 00:02:20.729 CXX test/cpp_headers/ublk.o 00:02:20.729 CXX test/cpp_headers/util.o 00:02:20.729 LINK verify 00:02:20.729 LINK iscsi_tgt 00:02:20.729 CXX test/cpp_headers/uuid.o 00:02:20.729 LINK poller_perf 00:02:20.729 CXX test/cpp_headers/version.o 00:02:20.729 CXX test/cpp_headers/vfio_user_pci.o 00:02:20.729 CXX test/cpp_headers/vfio_user_spec.o 00:02:20.729 CXX test/cpp_headers/vhost.o 00:02:20.729 CXX test/cpp_headers/xor.o 00:02:20.729 CXX test/cpp_headers/vmd.o 00:02:20.729 CXX test/cpp_headers/zipf.o 00:02:20.729 LINK spdk_tgt 00:02:20.729 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:20.729 LINK env_dpdk_post_init 00:02:20.729 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:20.729 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:20.729 LINK stub 00:02:20.729 LINK spdk_dd 00:02:20.729 LINK bdev_svc 00:02:20.986 LINK pci_ut 00:02:20.986 LINK spdk_trace 00:02:20.986 LINK test_dma 00:02:20.986 LINK nvme_fuzz 00:02:20.986 LINK spdk_nvme 00:02:21.244 CC examples/idxd/perf/perf.o 00:02:21.244 CC examples/vmd/lsvmd/lsvmd.o 00:02:21.244 CC examples/vmd/led/led.o 00:02:21.244 CC examples/sock/hello_world/hello_sock.o 00:02:21.244 LINK spdk_nvme_perf 00:02:21.244 LINK spdk_nvme_identify 00:02:21.244 LINK vhost_fuzz 00:02:21.244 CC examples/thread/thread/thread_ex.o 00:02:21.244 LINK spdk_bdev 00:02:21.244 CC test/event/reactor/reactor.o 00:02:21.244 CC test/event/event_perf/event_perf.o 00:02:21.244 CC test/event/reactor_perf/reactor_perf.o 00:02:21.244 CC test/event/app_repeat/app_repeat.o 00:02:21.244 CC test/event/scheduler/scheduler.o 00:02:21.244 LINK lsvmd 00:02:21.244 LINK mem_callbacks 00:02:21.244 LINK led 00:02:21.244 CC app/vhost/vhost.o 00:02:21.244 LINK spdk_top 00:02:21.244 LINK hello_sock 00:02:21.512 LINK reactor 00:02:21.512 LINK reactor_perf 00:02:21.512 LINK event_perf 00:02:21.512 LINK idxd_perf 00:02:21.512 LINK app_repeat 00:02:21.512 LINK thread 00:02:21.512 CC test/nvme/aer/aer.o 00:02:21.512 CC test/nvme/boot_partition/boot_partition.o 00:02:21.512 CC test/nvme/e2edp/nvme_dp.o 00:02:21.512 CC test/nvme/err_injection/err_injection.o 00:02:21.512 CC test/nvme/reset/reset.o 00:02:21.512 CC test/nvme/connect_stress/connect_stress.o 00:02:21.512 CC test/nvme/overhead/overhead.o 00:02:21.512 CC test/nvme/startup/startup.o 00:02:21.512 CC test/nvme/reserve/reserve.o 00:02:21.512 CC test/nvme/simple_copy/simple_copy.o 00:02:21.512 CC test/nvme/cuse/cuse.o 00:02:21.512 CC test/nvme/fdp/fdp.o 00:02:21.512 CC test/nvme/sgl/sgl.o 00:02:21.512 CC test/nvme/fused_ordering/fused_ordering.o 00:02:21.512 CC test/accel/dif/dif.o 00:02:21.512 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:21.512 CC test/nvme/compliance/nvme_compliance.o 00:02:21.512 LINK scheduler 00:02:21.512 LINK memory_ut 00:02:21.512 CC test/blobfs/mkfs/mkfs.o 00:02:21.512 LINK vhost 00:02:21.512 CC test/lvol/esnap/esnap.o 00:02:21.512 LINK boot_partition 00:02:21.771 LINK connect_stress 00:02:21.771 LINK err_injection 00:02:21.771 LINK startup 00:02:21.771 LINK fused_ordering 00:02:21.771 LINK doorbell_aers 00:02:21.771 LINK reserve 00:02:21.771 LINK simple_copy 00:02:21.771 LINK nvme_dp 00:02:21.771 LINK aer 00:02:21.771 LINK reset 00:02:21.771 LINK mkfs 00:02:21.771 LINK sgl 00:02:21.771 LINK overhead 00:02:21.771 LINK fdp 00:02:21.771 LINK nvme_compliance 00:02:21.771 CC examples/nvme/hello_world/hello_world.o 00:02:21.771 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:21.771 CC examples/nvme/reconnect/reconnect.o 00:02:21.771 CC examples/nvme/cmb_copy/cmb_copy.o 
00:02:21.771 CC examples/nvme/hotplug/hotplug.o 00:02:21.771 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:21.771 CC examples/nvme/arbitration/arbitration.o 00:02:21.771 CC examples/nvme/abort/abort.o 00:02:21.771 LINK dif 00:02:22.029 CC examples/accel/perf/accel_perf.o 00:02:22.029 CC examples/blob/cli/blobcli.o 00:02:22.029 CC examples/blob/hello_world/hello_blob.o 00:02:22.029 LINK cmb_copy 00:02:22.029 LINK pmr_persistence 00:02:22.029 LINK hotplug 00:02:22.029 LINK hello_world 00:02:22.029 LINK reconnect 00:02:22.029 LINK abort 00:02:22.029 LINK arbitration 00:02:22.288 LINK nvme_manage 00:02:22.288 LINK iscsi_fuzz 00:02:22.288 LINK hello_blob 00:02:22.288 LINK accel_perf 00:02:22.288 LINK blobcli 00:02:22.288 CC test/bdev/bdevio/bdevio.o 00:02:22.546 LINK cuse 00:02:22.804 LINK bdevio 00:02:22.804 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.804 CC examples/bdev/bdevperf/bdevperf.o 00:02:23.063 LINK hello_bdev 00:02:23.321 LINK bdevperf 00:02:23.887 CC examples/nvmf/nvmf/nvmf.o 00:02:24.172 LINK nvmf 00:02:25.139 LINK esnap 00:02:25.139 00:02:25.139 real 0m33.650s 00:02:25.139 user 5m8.522s 00:02:25.139 sys 2m24.089s 00:02:25.139 19:08:35 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:25.139 19:08:35 make -- common/autotest_common.sh@10 -- $ set +x 00:02:25.139 ************************************ 00:02:25.139 END TEST make 00:02:25.139 ************************************ 00:02:25.398 19:08:36 -- common/autotest_common.sh@1142 -- $ return 0 00:02:25.398 19:08:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:25.398 19:08:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:25.398 19:08:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:25.398 19:08:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.398 19:08:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:25.398 19:08:36 -- pm/common@44 -- $ pid=1304161 00:02:25.398 19:08:36 -- pm/common@50 -- $ kill -TERM 1304161 00:02:25.398 19:08:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.398 19:08:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:25.398 19:08:36 -- pm/common@44 -- $ pid=1304163 00:02:25.398 19:08:36 -- pm/common@50 -- $ kill -TERM 1304163 00:02:25.398 19:08:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.398 19:08:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:25.398 19:08:36 -- pm/common@44 -- $ pid=1304165 00:02:25.398 19:08:36 -- pm/common@50 -- $ kill -TERM 1304165 00:02:25.398 19:08:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.398 19:08:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:25.398 19:08:36 -- pm/common@44 -- $ pid=1304187 00:02:25.398 19:08:36 -- pm/common@50 -- $ sudo -E kill -TERM 1304187 00:02:25.398 19:08:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:25.398 19:08:36 -- nvmf/common.sh@7 -- # uname -s 00:02:25.398 19:08:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:25.398 19:08:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:25.398 19:08:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:25.398 19:08:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:02:25.398 19:08:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:25.398 19:08:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:25.398 19:08:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:25.398 19:08:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:25.398 19:08:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:25.398 19:08:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:25.398 19:08:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:25.398 19:08:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:25.398 19:08:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:25.398 19:08:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:25.398 19:08:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:25.398 19:08:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:25.398 19:08:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:25.398 19:08:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:25.398 19:08:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.398 19:08:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.398 19:08:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.398 19:08:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.398 19:08:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.398 19:08:36 -- paths/export.sh@5 -- # export PATH 00:02:25.398 19:08:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.398 19:08:36 -- nvmf/common.sh@47 -- # : 0 00:02:25.398 19:08:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:25.398 19:08:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:25.398 19:08:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:25.398 19:08:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:25.398 19:08:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:25.398 19:08:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:25.398 19:08:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:25.398 19:08:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:25.398 19:08:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:25.398 19:08:36 -- spdk/autotest.sh@32 -- # uname -s 00:02:25.398 19:08:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:25.398 19:08:36 -- 
spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:25.398 19:08:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.398 19:08:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:25.398 19:08:36 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.398 19:08:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:25.398 19:08:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:25.398 19:08:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:25.398 19:08:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:25.398 19:08:36 -- spdk/autotest.sh@48 -- # udevadm_pid=1378114 00:02:25.398 19:08:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:25.398 19:08:36 -- pm/common@17 -- # local monitor 00:02:25.398 19:08:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.399 19:08:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.399 19:08:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.399 19:08:36 -- pm/common@21 -- # date +%s 00:02:25.399 19:08:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.399 19:08:36 -- pm/common@21 -- # date +%s 00:02:25.399 19:08:36 -- pm/common@25 -- # sleep 1 00:02:25.399 19:08:36 -- pm/common@21 -- # date +%s 00:02:25.399 19:08:36 -- pm/common@21 -- # date +%s 00:02:25.399 19:08:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721063316 00:02:25.399 19:08:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721063316 00:02:25.399 19:08:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721063316 00:02:25.399 19:08:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721063316 00:02:25.399 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721063316_collect-vmstat.pm.log 00:02:25.399 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721063316_collect-cpu-load.pm.log 00:02:25.399 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721063316_collect-cpu-temp.pm.log 00:02:25.399 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721063316_collect-bmc-pm.bmc.pm.log 00:02:26.334 19:08:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:26.592 19:08:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:26.592 19:08:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:26.592 19:08:37 -- common/autotest_common.sh@10 -- # set +x 00:02:26.592 19:08:37 -- spdk/autotest.sh@59 -- # create_test_list 00:02:26.592 19:08:37 -- common/autotest_common.sh@746 
-- # xtrace_disable 00:02:26.592 19:08:37 -- common/autotest_common.sh@10 -- # set +x 00:02:26.592 19:08:37 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:26.592 19:08:37 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.592 19:08:37 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.592 19:08:37 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:26.592 19:08:37 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.592 19:08:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:26.592 19:08:37 -- common/autotest_common.sh@1455 -- # uname 00:02:26.592 19:08:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:26.592 19:08:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:26.592 19:08:37 -- common/autotest_common.sh@1475 -- # uname 00:02:26.592 19:08:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:26.592 19:08:37 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:26.592 19:08:37 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:26.592 19:08:37 -- spdk/autotest.sh@72 -- # hash lcov 00:02:26.592 19:08:37 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:26.592 19:08:37 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:26.592 --rc lcov_branch_coverage=1 00:02:26.592 --rc lcov_function_coverage=1 00:02:26.592 --rc genhtml_branch_coverage=1 00:02:26.592 --rc genhtml_function_coverage=1 00:02:26.592 --rc genhtml_legend=1 00:02:26.592 --rc geninfo_all_blocks=1 00:02:26.592 ' 00:02:26.592 19:08:37 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:26.592 --rc lcov_branch_coverage=1 00:02:26.592 --rc lcov_function_coverage=1 00:02:26.592 --rc genhtml_branch_coverage=1 00:02:26.592 --rc genhtml_function_coverage=1 00:02:26.592 --rc genhtml_legend=1 00:02:26.592 --rc geninfo_all_blocks=1 00:02:26.592 ' 00:02:26.592 19:08:37 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:26.592 --rc lcov_branch_coverage=1 00:02:26.592 --rc lcov_function_coverage=1 00:02:26.592 --rc genhtml_branch_coverage=1 00:02:26.592 --rc genhtml_function_coverage=1 00:02:26.592 --rc genhtml_legend=1 00:02:26.592 --rc geninfo_all_blocks=1 00:02:26.592 --no-external' 00:02:26.592 19:08:37 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:26.592 --rc lcov_branch_coverage=1 00:02:26.592 --rc lcov_function_coverage=1 00:02:26.592 --rc genhtml_branch_coverage=1 00:02:26.592 --rc genhtml_function_coverage=1 00:02:26.592 --rc genhtml_legend=1 00:02:26.592 --rc geninfo_all_blocks=1 00:02:26.592 --no-external' 00:02:26.592 19:08:37 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:26.592 lcov: LCOV version 1.14 00:02:26.592 19:08:37 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:38.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:38.785 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:48.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:48.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 
00:02:48.758-00:02:48.760 geninfo: WARNING: GCOV did not produce any data for the per-header objects under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/: every *.gcno there, from bdev.gcno through vfio_user_spec.gcno, was flagged "no functions found"; the identical warnings for the last few headers (uuid, vfio_user_pci, xor, vhost, vmd, zipf) continue below.
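For orientation only, since the actual coverage command is not shown in this excerpt: these warnings come from the coverage-capture step that follows the unit tests, and the per-header objects in test/cpp_headers have no executable functions for geninfo to record. A minimal sketch of such a capture, assuming a --coverage (GCOV) build and the stock lcov toolchain, which drives geninfo internally:

  # Assumed invocation, not taken from this log: capture GCOV counters into
  # an lcov tracefile; objects with no functions only produce warnings.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  lcov --capture --directory . --output-file coverage.info
  lcov --summary coverage.info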
functions found 00:02:48.760 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:48.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:48.760 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:48.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:48.760 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:48.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:48.760 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:48.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:48.760 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:48.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:48.760 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:51.320 19:09:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:51.320 19:09:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:51.320 19:09:01 -- common/autotest_common.sh@10 -- # set +x 00:02:51.320 19:09:01 -- spdk/autotest.sh@91 -- # rm -f 00:02:51.320 19:09:01 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.851 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:53.851 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.851 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.851 19:09:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:53.851 19:09:04 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:53.851 19:09:04 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:53.851 19:09:04 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:53.851 19:09:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:53.851 19:09:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:53.851 19:09:04 -- 
common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:53.851 19:09:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:53.851 19:09:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:53.851 19:09:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:53.851 19:09:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:53.851 19:09:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:53.851 19:09:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:53.851 19:09:04 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:53.851 19:09:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:53.851 No valid GPT data, bailing 00:02:53.851 19:09:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:53.851 19:09:04 -- scripts/common.sh@391 -- # pt= 00:02:53.851 19:09:04 -- scripts/common.sh@392 -- # return 1 00:02:53.851 19:09:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:53.851 1+0 records in 00:02:53.851 1+0 records out 00:02:53.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00165314 s, 634 MB/s 00:02:53.851 19:09:04 -- spdk/autotest.sh@118 -- # sync 00:02:53.851 19:09:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:53.851 19:09:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:53.851 19:09:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:59.121 19:09:09 -- spdk/autotest.sh@124 -- # uname -s 00:02:59.121 19:09:09 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:59.121 19:09:09 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:59.121 19:09:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.121 19:09:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.121 19:09:09 -- common/autotest_common.sh@10 -- # set +x 00:02:59.121 ************************************ 00:02:59.121 START TEST setup.sh 00:02:59.121 ************************************ 00:02:59.121 19:09:09 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:59.378 * Looking for test storage... 00:02:59.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.378 19:09:09 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:59.378 19:09:09 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:59.378 19:09:09 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:59.378 19:09:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.378 19:09:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.378 19:09:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:59.378 ************************************ 00:02:59.378 START TEST acl 00:02:59.378 ************************************ 00:02:59.378 19:09:10 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:59.378 * Looking for test storage... 
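The pre-cleanup trace above walks /sys/block/nvme*, skips zoned namespaces, asks spdk-gpt.py and blkid whether the namespace already carries a partition table ("No valid GPT data, bailing" means it does not), and then zeroes the first MiB so the tests start from a clean device. A condensed paraphrase of that logic, not the literal autotest.sh code:

  # Paraphrase of the traced pre-cleanup: leave zoned namespaces alone,
  # wipe the start of any namespace that has no partition table.
  declare -A zoned_devs=()
  for nvme in /dev/nvme*n1; do
      name=$(basename "$nvme")
      if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
          zoned_devs[$name]=1                        # zoned: do not touch
          continue
      fi
      if [[ -z $(blkid -s PTTYPE -o value "$nvme") ]]; then
          dd if=/dev/zero of="$nvme" bs=1M count=1   # matches the 1 MiB dd above
      fi
  done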
00:02:59.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.378 19:09:10 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:59.378 19:09:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:59.378 19:09:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:59.378 19:09:10 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:59.378 19:09:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:59.378 19:09:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:59.378 19:09:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:59.378 19:09:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.378 19:09:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:59.378 19:09:10 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:59.378 19:09:10 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:59.378 19:09:10 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:59.378 19:09:10 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:59.378 19:09:10 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:59.378 19:09:10 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.378 19:09:10 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.663 19:09:13 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:02.663 19:09:13 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:02.663 19:09:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:02.663 19:09:13 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:02.663 19:09:13 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.663 19:09:13 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:05.196 Hugepages 00:03:05.197 node hugesize free / total 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 00:03:05.197 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:05.197 19:09:15 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:05.197 19:09:15 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.197 19:09:15 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.197 19:09:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:05.197 ************************************ 00:03:05.197 START TEST denied 00:03:05.197 ************************************ 00:03:05.197 19:09:15 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:05.197 19:09:15 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:05.197 19:09:15 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:05.197 19:09:15 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:05.197 19:09:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.197 19:09:15 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.484 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:08.484 19:09:18 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:08.484 19:09:18 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:08.484 19:09:18 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:08.484 19:09:18 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:08.484 19:09:18 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:08.484 19:09:18 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:08.484 19:09:18 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:08.484 19:09:18 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:08.484 19:09:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.484 19:09:18 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.769 00:03:11.769 real 0m6.669s 00:03:11.769 user 0m2.132s 00:03:11.769 sys 0m3.857s 00:03:11.769 19:09:22 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.769 19:09:22 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:11.769 ************************************ 00:03:11.769 END TEST denied 00:03:11.769 ************************************ 00:03:11.769 19:09:22 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:11.769 19:09:22 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:11.769 19:09:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.769 19:09:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.769 19:09:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.769 ************************************ 00:03:11.769 START TEST allowed 00:03:11.769 ************************************ 00:03:11.769 19:09:22 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:11.769 19:09:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:11.769 19:09:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:11.769 19:09:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:11.769 19:09:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.769 19:09:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:15.957 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.957 19:09:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:15.957 19:09:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:15.957 19:09:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:15.957 19:09:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.957 19:09:26 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.497 00:03:18.497 real 0m6.459s 00:03:18.497 user 0m1.994s 00:03:18.497 sys 0m3.586s 00:03:18.497 19:09:29 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.497 19:09:29 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:18.497 ************************************ 00:03:18.497 END TEST allowed 00:03:18.497 ************************************ 00:03:18.497 19:09:29 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:18.497 00:03:18.497 real 0m19.008s 00:03:18.497 user 0m6.350s 00:03:18.497 sys 0m11.264s 00:03:18.497 19:09:29 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.497 19:09:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.497 ************************************ 00:03:18.497 END TEST acl 00:03:18.497 ************************************ 00:03:18.497 19:09:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:18.497 19:09:29 setup.sh -- 
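The acl test above exercises the PCI filter of scripts/setup.sh: with PCI_BLOCKED=' 0000:5e:00.0' the config pass must log "Skipping denied controller at 0000:5e:00.0", and with PCI_ALLOWED=0000:5e:00.0 the same controller must be picked up again. The same environment variables can be used outside the harness; an illustrative example with the address from this run (not a recommendation for other systems):

  # Illustrative only: limit which PCI devices scripts/setup.sh will touch.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  PCI_BLOCKED="0000:5e:00.0" ./scripts/setup.sh config    # controller is skipped
  ./scripts/setup.sh reset
  PCI_ALLOWED="0000:5e:00.0" ./scripts/setup.sh config    # only this controller is bound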
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.497 19:09:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.497 19:09:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.497 19:09:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.497 ************************************ 00:03:18.497 START TEST hugepages 00:03:18.497 ************************************ 00:03:18.497 19:09:29 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.497 * Looking for test storage... 00:03:18.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 171810444 kB' 'MemAvailable: 174691048 kB' 'Buffers: 3896 kB' 'Cached: 11715420 kB' 'SwapCached: 0 kB' 'Active: 8728340 kB' 'Inactive: 3507524 kB' 'Active(anon): 8336332 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519900 kB' 'Mapped: 214732 kB' 'Shmem: 7819784 kB' 'KReclaimable: 251048 kB' 'Slab: 841236 kB' 'SReclaimable: 251048 kB' 'SUnreclaim: 590188 kB' 'KernelStack: 20448 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 9872128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315420 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.497 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # 
00:03:18.497-00:03:18.498 19:09:29 setup.sh.hugepages: the get_meminfo trace continues field by field through the rest of /proc/meminfo (Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free), repeating the same '[[ var == Hugepagesize ]] / continue / IFS=': ' / read -r var val _' steps for each, with no match until the Hugepagesize line.
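The loop summarized above is get_meminfo scanning /proc/meminfo until it reaches the Hugepagesize field and echoing its value (2048 here). Functionally it reduces to a single lookup; a brief awk-based equivalent, not the harness code:

  # Shortcut equivalent to the field-by-field loop traced above.
  hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
  echo "$hugepagesize_kb"                            # 2048 on this machine
  # The matching per-size counter that the hugepages test manipulates later:
  cat /sys/kernel/mm/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages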
setup/common.sh@31 -- # read -r var val _ 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.498 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.499 
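clear_hp, traced above and continuing below, resets the hugepage pools before the sub-test: for each NUMA node it writes 0 into every hugepages-*/nr_hugepages counter, after which CLEAR_HUGE=yes is exported. A compact paraphrase of that loop:

  # Paraphrase of clear_hp: drop every per-node hugepage pool on this 2-node box.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"                # release all pages of this size
      done
  done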
19:09:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.499 19:09:29 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:18.499 19:09:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.499 19:09:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.499 19:09:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.499 ************************************ 00:03:18.499 START TEST default_setup 00:03:18.499 ************************************ 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.499 19:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.076 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 
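default_setup, started above, asks get_test_nr_hugepages for 2097152 kB on node 0, which at the 2048 kB page size read earlier works out to 1024 hugepages on a single node; scripts/setup.sh is then run again (the ioatdma -> vfio-pci lines are that script rebinding the I/OAT channels, and later the NVMe controller, to vfio-pci for userspace use). The allocation itself is one sysfs write; a sketch with this run's numbers:

  # 2097152 kB requested / 2048 kB per page = 1024 hugepages on node 0.
  echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  grep -i hugepages_total /sys/devices/system/node/node0/meminfo    # confirm the pool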
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:21.076 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.021 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173995432 kB' 'MemAvailable: 176876056 kB' 'Buffers: 3896 kB' 'Cached: 11715516 kB' 'SwapCached: 0 kB' 'Active: 8745480 kB' 'Inactive: 3507524 kB' 'Active(anon): 8353472 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537048 kB' 'Mapped: 214872 kB' 'Shmem: 7819880 kB' 'KReclaimable: 251088 kB' 'Slab: 840040 kB' 'SReclaimable: 251088 kB' 'SUnreclaim: 588952 
kB' 'KernelStack: 20528 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9892608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.021 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.022 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173996832 kB' 'MemAvailable: 176877456 kB' 'Buffers: 3896 kB' 'Cached: 11715520 kB' 'SwapCached: 0 kB' 'Active: 8745428 kB' 'Inactive: 3507524 kB' 'Active(anon): 8353420 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536956 kB' 'Mapped: 214840 kB' 'Shmem: 7819884 kB' 'KReclaimable: 251088 kB' 'Slab: 840084 kB' 'SReclaimable: 251088 kB' 'SUnreclaim: 588996 kB' 'KernelStack: 20560 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9892624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315420 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.023 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:22.024 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173996056 kB' 'MemAvailable: 176876680 kB' 'Buffers: 3896 kB' 'Cached: 11715540 kB' 'SwapCached: 0 kB' 'Active: 8746224 kB' 'Inactive: 3507524 kB' 'Active(anon): 8354216 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537768 kB' 'Mapped: 214840 kB' 'Shmem: 7819904 kB' 'KReclaimable: 251088 kB' 'Slab: 840068 kB' 'SReclaimable: 251088 kB' 'SUnreclaim: 588980 kB' 'KernelStack: 20544 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9895280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 
19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.025 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.026 nr_hugepages=1024 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.026 resv_hugepages=0 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.026 surplus_hugepages=0 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.026 anon_hugepages=0 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.026 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 
19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173996056 kB' 'MemAvailable: 176876680 kB' 'Buffers: 3896 kB' 'Cached: 11715540 kB' 'SwapCached: 0 kB' 'Active: 8746144 kB' 'Inactive: 3507524 kB' 'Active(anon): 8354136 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537708 kB' 'Mapped: 214840 kB' 'Shmem: 7819904 kB' 'KReclaimable: 251088 kB' 'Slab: 840068 kB' 'SReclaimable: 251088 kB' 'SUnreclaim: 588980 kB' 'KernelStack: 20640 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9895292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
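What the trace is replaying here is setup/common.sh's get_meminfo: the /proc/meminfo capture printed above is read back line by line with IFS=': ', every field that is not the requested key produces one of the "continue" entries, and the matching key's value is echoed back (1024 for HugePages_Total in this run). A minimal stand-alone sketch of that loop, reading /proc/meminfo directly instead of a captured copy, not the SPDK helper itself:

# Sketch only: scans "Key: value [kB]" lines and prints the value for the
# requested key, mirroring the continue/echo pattern in the trace above.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # mismatching fields are skipped, as in the trace
        echo "$val"
        return 0
    done < /proc/meminfo
}
get_meminfo_sketch HugePages_Total    # 1024 in this run
get_meminfo_sketch Hugepagesize       # 2048 (kB)

The capture is self-consistent: 1024 pages at 2048 kB each is 2097152 kB, exactly the Hugetlb figure printed above.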
00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.027 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85027824 kB' 'MemUsed: 12634860 kB' 'SwapCached: 0 kB' 'Active: 6064500 kB' 'Inactive: 3335888 kB' 'Active(anon): 5906960 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9239996 kB' 'Mapped: 122716 kB' 'AnonPages: 163752 kB' 'Shmem: 5746568 kB' 'KernelStack: 11816 kB' 'PageTables: 4740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133612 kB' 'Slab: 406812 kB' 
'SReclaimable: 133612 kB' 'SUnreclaim: 273200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.028 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
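At this point get_meminfo is running against node 0: get_nodes found two NUMA nodes under /sys/devices/system/node/, mem_f was switched to node0/meminfo, and the leading "Node 0 " prefix was stripped from every captured line before the same field-by-field scan. The node-0 figures printed above are internally consistent (MemUsed 12634860 kB = MemTotal 97662684 kB - MemFree 85027824 kB). A small sketch of reading those per-node counters directly, assuming the sysfs layout used in this run:

# Sketch only: reads the per-node counters the trace above is scanning.
node=0
f=/sys/devices/system/node/node${node}/meminfo       # lines look like: "Node 0 MemTotal: ... kB"
total=$(awk '$3 == "MemTotal:" {print $4}' "$f")
free=$(awk  '$3 == "MemFree:"  {print $4}' "$f")
echo "node${node} MemUsed = $(( total - free )) kB"   # 97662684 - 85027824 = 12634860 kB here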
00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.029 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:22.030 19:09:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:22.030 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.030 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.030 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.030 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.030 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:22.030 node0=1024 expecting 1024 00:03:22.030 19:09:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.030 00:03:22.030 real 0m3.571s 00:03:22.030 user 0m1.055s 00:03:22.030 sys 0m1.707s 00:03:22.030 19:09:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:22.030 19:09:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:22.030 ************************************ 00:03:22.030 END TEST default_setup 00:03:22.030 ************************************ 00:03:22.289 19:09:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:22.289 19:09:32 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:22.289 19:09:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.289 19:09:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.289 19:09:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.289 ************************************ 00:03:22.289 START TEST per_node_1G_alloc 00:03:22.289 ************************************ 00:03:22.289 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- 
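default_setup has just passed (node0=1024 expecting 1024, about 3.6 s wall time), and per_node_1G_alloc opens by asking get_test_nr_hugepages for 1048576 kB spread across nodes 0 and 1. With the 2048 kB default hugepage size shown in the captures above, that request converts as follows; a sketch of the arithmetic only, not the helper itself:

# Sketch only: the size-to-page-count conversion behind nr_hugepages=512 below.
size_kb=1048576                          # 1 GiB requested per node
hugepagesize_kb=2048                     # default hugepage size in this run
echo $(( size_kb / hugepagesize_kb ))    # 512 pages for node 0 and 512 for node 1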
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.290 19:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.851 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.851 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.851 
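hugepages.sh@146 then re-runs the setup script with NRHUGE=512 and HUGENODE=0,1, i.e. 512 pages on each of nodes 0 and 1 (1024 total, matching the nr_hugepages=1024 the verification step below starts from); the "Already using the vfio-pci driver" lines are that script re-probing PCI functions it had already bound. The equivalent manual invocation for this workspace, needing root, would be roughly:

# Sketch of the call hugepages.sh@146 makes above; NRHUGE/HUGENODE are the
# same environment knobs the trace sets (values taken from this run).
NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh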
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.851 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173978124 kB' 'MemAvailable: 176858748 kB' 'Buffers: 3896 kB' 'Cached: 11715652 kB' 'SwapCached: 0 kB' 'Active: 8746884 kB' 'Inactive: 3507524 kB' 'Active(anon): 8354876 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537596 kB' 'Mapped: 214956 kB' 'Shmem: 7820016 kB' 'KReclaimable: 251088 kB' 'Slab: 840076 kB' 'SReclaimable: 251088 kB' 'SUnreclaim: 588988 kB' 'KernelStack: 20688 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9893920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315724 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.851 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.852 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173980808 kB' 'MemAvailable: 176861428 kB' 'Buffers: 3896 kB' 'Cached: 11715656 kB' 'SwapCached: 0 kB' 'Active: 8747392 kB' 'Inactive: 3507524 kB' 'Active(anon): 8355384 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538692 kB' 'Mapped: 214868 kB' 'Shmem: 7820020 kB' 'KReclaimable: 251080 kB' 'Slab: 840084 kB' 'SReclaimable: 251080 kB' 'SUnreclaim: 589004 kB' 'KernelStack: 20880 kB' 'PageTables: 9820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9926364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315804 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 
19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.853 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173981600 kB' 'MemAvailable: 176862220 kB' 'Buffers: 3896 kB' 'Cached: 11715656 kB' 'SwapCached: 0 kB' 'Active: 8747532 kB' 'Inactive: 3507524 kB' 'Active(anon): 8355524 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538820 kB' 'Mapped: 214868 kB' 'Shmem: 7820020 kB' 'KReclaimable: 251080 kB' 'Slab: 840084 kB' 'SReclaimable: 251080 kB' 'SUnreclaim: 589004 kB' 'KernelStack: 20832 kB' 'PageTables: 9840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9895088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315740 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.854 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.854 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.855 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.856 19:09:35 
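Each "continue" step in the trace above is one pass of the meminfo scanner in setup/common.sh: the get_meminfo helper walks the chosen meminfo file line by line until the requested key (here HugePages_Rsvd) matches, then echoes that key's value. A minimal bash sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source, so details may differ:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2            # e.g. get=HugePages_Rsvd, node may be empty
        local var val _rest line
        local mem_f=/proc/meminfo mem
        # Use the per-NUMA-node file when a node index was given and the file exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _rest <<< "$line"
            [[ $var == "$get" ]] || continue   # the long runs of "continue" seen above
            echo "${val:-0}"                   # here: HugePages_Rsvd -> 0
            return 0
        done
        return 1
    }

Invoked as get_meminfo HugePages_Rsvd (no node argument), it prints 0 on this host, which is where the resv=0 on the next trace line comes from.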
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.856 nr_hugepages=1024 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.856 resv_hugepages=0 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.856 surplus_hugepages=0 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.856 anon_hugepages=0 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.856 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173980400 kB' 'MemAvailable: 176861020 kB' 'Buffers: 3896 kB' 'Cached: 11715660 kB' 'SwapCached: 0 kB' 'Active: 8747648 kB' 'Inactive: 3507524 kB' 'Active(anon): 8355640 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538388 kB' 'Mapped: 214868 kB' 'Shmem: 7820024 kB' 'KReclaimable: 251080 kB' 'Slab: 840084 kB' 'SReclaimable: 251080 kB' 'SUnreclaim: 589004 kB' 'KernelStack: 20720 kB' 'PageTables: 9820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9895112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315772 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 
kB' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.857 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.858 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.119 19:09:35 
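The get_nodes call the trace enters next builds the per-node expectation before checking each node individually: on this host both /sys/devices/system/node/node0 and node1 carry 512 of the 1024 pages just verified globally. A rough sketch of that bookkeeping, approximated from the trace (the names nodes_sys, nodes_test, no_nodes and resv appear in the trace; the surrounding wiring here is an assumption, not the verbatim setup/hugepages.sh):

    shopt -s extglob nullglob
    declare -a nodes_sys nodes_test
    nodes_test=( [0]=512 [1]=512 )     # expected split of the 1024 pages across 2 nodes
    resv=0                             # reserved pages, from get_meminfo HugePages_Rsvd above

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # index by the numeric suffix: .../node1 -> 1; 512 is the value traced on this run
            nodes_sys[${node##*node}]=512
        done
        no_nodes=${#nodes_sys[@]}      # 2 on this machine
        (( no_nodes > 0 ))
    }

    get_nodes
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                 # resv is 0 here
        # per-node read, using the get_meminfo sketch shown earlier; traced below per node
        surp=$(get_meminfo HugePages_Surp "$node")
        (( nodes_test[node] += surp ))                 # surplus is also 0 on both nodes
    done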
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.119 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86068980 kB' 'MemUsed: 11593704 kB' 'SwapCached: 0 kB' 'Active: 6063152 kB' 'Inactive: 3335888 kB' 'Active(anon): 5905612 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9240148 kB' 'Mapped: 122732 kB' 'AnonPages: 162108 kB' 'Shmem: 5746720 kB' 'KernelStack: 11656 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133612 kB' 'Slab: 406940 kB' 'SReclaimable: 133612 kB' 'SUnreclaim: 273328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.120 
19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.120 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87910660 kB' 'MemUsed: 5807808 kB' 'SwapCached: 0 kB' 'Active: 2683244 kB' 'Inactive: 171636 kB' 'Active(anon): 2448776 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171636 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479468 kB' 'Mapped: 92136 kB' 'AnonPages: 375412 kB' 'Shmem: 2073364 kB' 'KernelStack: 8952 kB' 'PageTables: 4808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 117468 kB' 'Slab: 433144 kB' 'SReclaimable: 117468 kB' 'SUnreclaim: 315676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.121 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:25.122 node0=512 expecting 512 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:25.122 node1=512 expecting 512 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:25.122 00:03:25.122 real 0m2.864s 00:03:25.122 user 0m1.189s 00:03:25.122 sys 0m1.739s 00:03:25.122 19:09:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.122 19:09:35 
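The loop traced above is setup/common.sh's get_meminfo helper scanning a node's meminfo dump key by key until it reaches HugePages_Surp, echoing 0, after which the test confirms 512 pages on node0 and node1. A minimal sketch of that lookup, using an illustrative re-implementation rather than the real helper:

#!/usr/bin/env bash
# Illustrative re-implementation of the meminfo lookup seen in the trace;
# the real helper is setup/common.sh::get_meminfo and may differ in detail.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # When a node id is given, read the per-node copy exposed through sysfs.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # Per-node files prefix every row with "Node <id> "; drop that prefix.
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"   # HugePages_* rows carry a bare count, others a kB value
            return 0
        fi
    done <"$mem_f"
    echo 0
}

# Example: surplus huge pages on node 1, as queried in the trace above.
# get_meminfo HugePages_Surp 1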
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:25.122 ************************************ 00:03:25.122 END TEST per_node_1G_alloc 00:03:25.122 ************************************ 00:03:25.122 19:09:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:25.122 19:09:35 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:25.122 19:09:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.122 19:09:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.122 19:09:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.122 ************************************ 00:03:25.122 START TEST even_2G_alloc 00:03:25.122 ************************************ 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:25.122 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:25.123 19:09:35 
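At this point the log switches to the even_2G_alloc test: get_test_nr_hugepages turns the requested 2097152 kB into 1024 default-size (2048 kB) pages and, with no user node list, assigns 512 pages to each of the two NUMA nodes before running setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A rough sketch of that arithmetic, assuming the size argument and Hugepagesize are both in kB (the 1024-page result implies as much); the variable names are illustrative, not the actual setup/hugepages.sh helpers:

# Illustrative sketch of the size-to-pages split shown in the trace.
size_kb=2097152                                                  # 2 GiB requested
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
nr_hugepages=$(( size_kb / hugepage_kb ))                        # 1024 pages

nr_nodes=2
declare -a nodes_test
for (( node = 0; node < nr_nodes; node++ )); do
    nodes_test[node]=$(( nr_hugepages / nr_nodes ))              # 512 pages per node
done

echo "node0=${nodes_test[0]} expecting ${nodes_test[0]}"
echo "node1=${nodes_test[1]} expecting ${nodes_test[1]}"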
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.123 19:09:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.661 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.661 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.661 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.661 19:09:38 
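With the PCI devices already claimed by vfio-pci, verify_nr_hugepages begins by checking whether transparent huge pages are enabled ("always [madvise] never" here, i.e. not [never]) and, if so, reads AnonHugePages so THP-backed anonymous memory is not confused with the hugetlb pool. A hedged sketch of that guard, reusing the illustrative get_meminfo above; the real accounting lives in setup/hugepages.sh::verify_nr_hugepages and may differ:

# Illustrative sketch of the THP guard seen at the start of the verification.
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp_state != *"[never]"* ]]; then
    # THP can hand out anonymous huge pages; record them separately so they
    # are not mistaken for hugetlbfs pages when totals are compared.
    anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
fi
echo "AnonHugePages accounted for: ${anon} kB"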
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174004408 kB' 'MemAvailable: 176885020 kB' 'Buffers: 3896 kB' 'Cached: 11715816 kB' 'SwapCached: 0 kB' 'Active: 8744148 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352140 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534720 kB' 'Mapped: 213828 kB' 'Shmem: 7820180 kB' 'KReclaimable: 251064 kB' 'Slab: 839728 kB' 'SReclaimable: 251064 kB' 'SUnreclaim: 588664 kB' 'KernelStack: 20736 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9879400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.661 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.662 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.927 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174005052 kB' 'MemAvailable: 176885664 kB' 'Buffers: 3896 kB' 'Cached: 11715816 kB' 'SwapCached: 0 kB' 'Active: 8744636 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352628 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535236 kB' 'Mapped: 213828 kB' 'Shmem: 7820180 kB' 'KReclaimable: 251064 kB' 'Slab: 839728 kB' 'SReclaimable: 251064 kB' 'SUnreclaim: 588664 kB' 'KernelStack: 20736 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9880912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315612 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
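AnonHugePages resolves to 0 (anon=0), and the same scan is repeated for the remaining system-wide counters: HugePages_Surp (surplus pages beyond the static pool, surp=0 below) and then HugePages_Rsvd (pages reserved by mappings but not yet faulted in). A sketch of how those counters relate to the 1024-page request; the exact comparison hugepages.sh makes may differ, so this is only an illustration of the kernel's hugetlb accounting (HugePages_Total includes any surplus pages):

# Illustrative check only, using the sketch get_meminfo from above.
surp=$(get_meminfo HugePages_Surp)    # pages allocated beyond the static pool
resv=$(get_meminfo HugePages_Rsvd)    # pages reserved but not yet faulted in
total=$(get_meminfo HugePages_Total)
free=$(get_meminfo HugePages_Free)

# With surp=0, the static pool should match the NRHUGE=1024 request exactly.
if (( total - surp == 1024 )); then
    echo "static hugepage pool matches the 1024-page request (free=$free, resv=$resv)"
fi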
00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.928 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.929 19:09:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174007080 kB' 'MemAvailable: 176887692 kB' 'Buffers: 3896 kB' 'Cached: 11715828 kB' 'SwapCached: 0 kB' 'Active: 8744420 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352412 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535500 kB' 'Mapped: 213752 kB' 'Shmem: 7820192 kB' 'KReclaimable: 251064 kB' 'Slab: 839696 kB' 'SReclaimable: 251064 kB' 'SUnreclaim: 588632 kB' 'KernelStack: 20768 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9880936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 
19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.930 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.931 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:27.931 nr_hugepages=1024 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.932 resv_hugepages=0 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.932 surplus_hugepages=0 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.932 anon_hugepages=0 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.932 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174008008 kB' 'MemAvailable: 176888620 kB' 'Buffers: 3896 kB' 'Cached: 11715852 kB' 'SwapCached: 0 kB' 'Active: 8744424 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352416 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535396 kB' 'Mapped: 213752 kB' 'Shmem: 7820216 kB' 'KReclaimable: 251064 kB' 'Slab: 839696 kB' 'SReclaimable: 251064 kB' 'SUnreclaim: 588632 kB' 'KernelStack: 20832 kB' 'PageTables: 9580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9880708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 
19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.932 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.933 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
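The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" entries above are the get_meminfo helper from setup/common.sh scanning meminfo key by key until it reaches the field it was asked for. A minimal sketch of that helper, reconstructed from the traced commands (setup/common.sh@17-@33); the function wrapper and the redirections do not appear in xtrace output, so those details are an assumption:

    shopt -s extglob   # needed for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1      # e.g. HugePages_Surp, HugePages_Rsvd, HugePages_Total
        local node=$2     # optional NUMA node index
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Use the per-node view when a node index is given and the file exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            # Every non-matching key shows up as a "continue" entry in the trace.
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as, for example, get_meminfo HugePages_Surp 0, it prints the per-node surplus count (0 in the trace above) and returns 0, which is exactly the "echo 0" / "return 0" pair visible at setup/common.sh@33.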
00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86085720 kB' 'MemUsed: 11576964 kB' 'SwapCached: 0 kB' 'Active: 6062984 kB' 'Inactive: 3335888 kB' 'Active(anon): 5905444 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9240256 kB' 'Mapped: 122352 kB' 'AnonPages: 161768 kB' 'Shmem: 5746828 kB' 'KernelStack: 11736 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133612 kB' 'Slab: 406800 kB' 'SReclaimable: 133612 kB' 'SUnreclaim: 273188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.934 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87920896 kB' 'MemUsed: 5797572 kB' 'SwapCached: 0 kB' 'Active: 2681444 kB' 'Inactive: 171636 kB' 'Active(anon): 2446976 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171636 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479492 kB' 'Mapped: 91400 kB' 'AnonPages: 373660 kB' 'Shmem: 2073388 kB' 'KernelStack: 8968 kB' 'PageTables: 4904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117452 kB' 'Slab: 432896 kB' 'SReclaimable: 117452 kB' 'SUnreclaim: 315444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.935 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:27.936 node0=512 expecting 512 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:27.936 node1=512 expecting 512 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:27.936 00:03:27.936 real 0m2.835s 00:03:27.936 user 0m1.176s 00:03:27.936 sys 0m1.724s 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.936 19:09:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:27.936 ************************************ 00:03:27.936 END TEST even_2G_alloc 00:03:27.936 ************************************ 00:03:27.936 
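The even_2G_alloc trace above is the setup/common.sh get_meminfo helper scanning every field of /proc/meminfo or /sys/devices/system/node/nodeN/meminfo until it reaches the requested key (HugePages_Surp here); hugepages.sh then folds the surplus and reserved counts into nodes_test[] and prints the per-node expectation. A minimal standalone sketch of that lookup, reconstructed from the behaviour visible in the trace rather than copied from the repository (function name and loop structure here are illustrative only), would be:

    get_meminfo_sketch() {
        # Look up one meminfo field, optionally for a single NUMA node,
        # the way the trace above does. Sketch only, based on the log;
        # the real helper is get_meminfo in setup/common.sh.
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local line var val _

        # Per-node statistics live under /sys and prefix every row with "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        while IFS= read -r line; do
            line=${line#"Node $node "}             # drop the per-node prefix
            IFS=': ' read -r var val _ <<< "$line" # split "Key:  value [kB]"
            if [[ $var == "$get" ]]; then
                echo "$val"                        # numeric value only; "kB" lands in _
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # Reproduces the shape of the per-node check printed at the end of the
    # test above ("node0=512 expecting 512", "node1=512 expecting 512"):
    for node in 0 1; do
        echo "node${node}=$(get_meminfo_sketch HugePages_Free "$node") expecting 512"
    done

The field-by-field "continue" lines in the trace are simply this loop skipping every key that is not the one requested, once per line of the node's meminfo file.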
19:09:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:27.936 19:09:38 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:27.936 19:09:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.936 19:09:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.936 19:09:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.936 ************************************ 00:03:27.936 START TEST odd_alloc 00:03:27.936 ************************************ 00:03:27.936 19:09:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:27.936 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:27.936 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:27.936 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.936 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.937 19:09:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.501 0000:00:04.7 (8086 2021): Already using 
the vfio-pci driver 00:03:30.501 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.501 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.501 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174006344 kB' 'MemAvailable: 176886936 kB' 'Buffers: 3896 kB' 'Cached: 11715960 kB' 'SwapCached: 0 kB' 'Active: 8744356 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352348 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535216 kB' 'Mapped: 213752 kB' 'Shmem: 7820324 kB' 'KReclaimable: 251024 kB' 'Slab: 839160 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 588136 kB' 'KernelStack: 20960 kB' 'PageTables: 9960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9881236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.501 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 
19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.502 
19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174004232 kB' 'MemAvailable: 176884824 kB' 'Buffers: 3896 kB' 'Cached: 11715960 kB' 'SwapCached: 0 kB' 'Active: 8743236 kB' 'Inactive: 3507524 kB' 'Active(anon): 8351228 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534088 kB' 'Mapped: 213816 kB' 'Shmem: 7820324 kB' 'KReclaimable: 251024 kB' 'Slab: 839232 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 588208 kB' 'KernelStack: 20720 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9881256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315612 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.502 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.503 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.503 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.503 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.503 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.503 
19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... setup/common.sh@31-32 read/continue loop: the remaining /proc/meminfo fields (SwapCached through HugePages_Rsvd) are read and skipped because none of them match HugePages_Surp ...]
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.504 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174004396 kB' 'MemAvailable: 176884988 kB' 'Buffers: 3896 kB' 'Cached: 11715976 kB' 'SwapCached: 0 kB' 'Active: 8745216 kB' 'Inactive: 3507524 kB' 'Active(anon): 8353208 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535980 kB' 'Mapped: 213740 kB' 'Shmem: 7820340 kB' 'KReclaimable: 251024 kB' 'Slab: 839408 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 588384 kB' 'KernelStack: 21088 kB' 'PageTables: 10668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9881280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315740 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB'
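For reference, a minimal standalone sketch of the lookup pattern the trace above keeps repeating (an illustration only; get_meminfo_sketch is a hypothetical name, not the actual setup/common.sh helper): each "Key: value" line of /proc/meminfo, or of a per-node meminfo file under sysfs, is read, the key is split off, and the numeric value is printed for the first key that matches the one requested.

  # Sketch only: simplified stand-in for the get_meminfo behaviour exercised above.
  # $1 is the meminfo key (e.g. HugePages_Rsvd), $2 an optional NUMA node number.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node counters live under sysfs when a node is given, as the trace does later for node0.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line key val
      while read -r line; do
          line=${line#"Node $node "}      # per-node files prefix every line with "Node <n> "
          key=${line%%:*}
          if [[ $key == "$get" ]]; then
              val=${line#*:}
              echo "${val//[!0-9]/}"      # keep the digits only, dropping the " kB" suffix
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Rsvd     -> 0 on this host, matching the value echoed below
  #      get_meminfo_sketch HugePages_Surp 0   -> node0 value from /sys/devices/system/node/node0/meminfo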
[... setup/common.sh@31-32 read/continue loop: the /proc/meminfo fields from MemTotal through HugePages_Free are read and skipped because none of them match HugePages_Rsvd ...]
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:30.506 nr_hugepages=1025
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:30.506 resv_hugepages=0
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:30.506 surplus_hugepages=0
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:30.506 anon_hugepages=0
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
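The arithmetic checks at setup/hugepages.sh@107 and @109 above verify that the requested odd total (1025) is consistent with the kernel's counters: 1025 == nr_hugepages + surp + resv, which with surp=0 and resv=0 reduces to 1025 == nr_hugepages. A self-contained way to reproduce the same accounting check from /proc/meminfo (illustrative only, not the test script itself; assumes awk is available):

  # Re-check the hugepage accounting asserted above, reading the counters live.
  nr_hugepages=1025   # the odd page count the odd_alloc test requests
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
  else
      echo "mismatch: total=$total requested=$nr_hugepages surp=$surp resv=$resv" >&2
  fi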
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.506 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174001880 kB' 'MemAvailable: 176882472 kB' 'Buffers: 3896 kB' 'Cached: 11715976 kB' 'SwapCached: 0 kB' 'Active: 8746620 kB' 'Inactive: 3507524 kB' 'Active(anon): 8354612 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536880 kB' 'Mapped: 213740 kB' 'Shmem: 7820340 kB' 'KReclaimable: 251024 kB' 'Slab: 839408 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 588384 kB' 'KernelStack: 21424 kB' 'PageTables: 11040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9881808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315740 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB'
[... setup/common.sh@31-32 read/continue loop: the /proc/meminfo fields from MemTotal through Unaccepted are read and skipped because none of them match HugePages_Total ...]
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86089336 kB' 'MemUsed: 11573348 kB' 'SwapCached: 0 kB' 'Active: 6063704 kB' 'Inactive: 3335888 kB' 'Active(anon): 5906164 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9240400 kB' 'Mapped: 122344 kB' 'AnonPages: 162292 kB' 'Shmem: 5746972 kB' 'KernelStack: 12008 kB' 'PageTables: 5616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133580 kB' 'Slab: 406280 kB' 'SReclaimable: 133580 kB' 'SUnreclaim: 272700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
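At this point the test moves to the per-node verification: get_nodes found two NUMA nodes, the odd total is expected to be split as 512 pages on node0 and 513 on node1 (the nodes_sys assignments above), and each node's actual count is read back from that node's own meminfo file under /sys/devices/system/node, which is what the node0 dump just printed. A rough standalone sketch of that read-back (illustrative only; the split-with-remainder logic is inferred from the 512/513 values in this log, not copied from setup/hugepages.sh):

  # Sketch: compare an odd hugepage total split across the NUMA nodes with what each
  # node's sysfs meminfo reports, the same files the trace switches to for node0.
  nr_hugepages=1025
  nodes=(/sys/devices/system/node/node[0-9]*)
  base=$(( nr_hugepages / ${#nodes[@]} ))    # 512 with two nodes
  extra=$(( nr_hugepages % ${#nodes[@]} ))   # the one leftover page
  for i in "${!nodes[@]}"; do
      want=$base
      (( i == ${#nodes[@]} - 1 )) && want=$(( base + extra ))   # matches the 512/513 split logged above
      have=$(awk '$3 == "HugePages_Total:" {print $4}' "${nodes[i]}/meminfo")
      echo "node$i: expected $want hugepages, kernel reports ${have:-unknown}"
  done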
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9240400 kB' 'Mapped: 122344 kB' 'AnonPages: 162292 kB' 'Shmem: 5746972 kB' 'KernelStack: 12008 kB' 'PageTables: 5616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133580 kB' 'Slab: 406280 kB' 'SReclaimable: 133580 kB' 'SUnreclaim: 272700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.770 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87913776 kB' 'MemUsed: 5804692 kB' 'SwapCached: 0 kB' 'Active: 2682200 kB' 'Inactive: 171636 kB' 'Active(anon): 2447732 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171636 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479524 kB' 'Mapped: 91412 kB' 'AnonPages: 374228 kB' 'Shmem: 2073420 kB' 'KernelStack: 9064 kB' 'PageTables: 4976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117444 kB' 'Slab: 433004 kB' 'SReclaimable: 117444 kB' 'SUnreclaim: 315560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.771 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:30.772 node0=512 expecting 513 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.772 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.773 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.773 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:30.773 node1=513 expecting 512 00:03:30.773 19:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:30.773 00:03:30.773 real 0m2.717s 00:03:30.773 user 0m1.030s 00:03:30.773 sys 0m1.722s 00:03:30.773 19:09:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.773 19:09:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.773 ************************************ 00:03:30.773 END TEST odd_alloc 00:03:30.773 ************************************ 00:03:30.773 19:09:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:30.773 19:09:41 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:30.773 19:09:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.773 19:09:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.773 19:09:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.773 ************************************ 00:03:30.773 START TEST custom_alloc 00:03:30.773 ************************************ 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size 
>= default_hugepages )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.773 19:09:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.311 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:33.311 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:00:04.3 (8086 
2021): Already using the vfio-pci driver 00:03:33.311 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.311 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.311 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172957504 kB' 'MemAvailable: 175838096 kB' 'Buffers: 3896 kB' 'Cached: 11716116 kB' 'SwapCached: 0 kB' 'Active: 8745404 kB' 'Inactive: 3507524 kB' 'Active(anon): 8353396 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535544 kB' 'Mapped: 213844 
kB' 'Shmem: 7820480 kB' 'KReclaimable: 251024 kB' 'Slab: 839712 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 588688 kB' 'KernelStack: 20784 kB' 'PageTables: 9680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9882152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315820 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 
19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.312 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.313 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172956328 kB' 'MemAvailable: 175836920 kB' 'Buffers: 3896 kB' 'Cached: 11716116 kB' 'SwapCached: 0 kB' 'Active: 8744608 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352600 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535716 kB' 'Mapped: 213740 kB' 'Shmem: 7820480 kB' 'KReclaimable: 251024 kB' 'Slab: 840148 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 589124 kB' 'KernelStack: 20800 kB' 'PageTables: 9744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9882168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315708 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB'
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
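The field-by-field "[[ ... ]] / continue" trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one key at a time until it reaches the requested field (here HugePages_Surp, which reads 0). As a rough illustration only, not the repository's actual helper, the same lookup could be written as the standalone sketch below; the function name is hypothetical.
# Minimal sketch (assumption, not the SPDK helper itself): return one
# /proc/meminfo value the way the traced loop does, optionally per NUMA node.
get_meminfo_sketch() {
  local get=$1 node=$2
  local mem_f=/proc/meminfo
  # The trace tests /sys/devices/system/node/node$node/meminfo when a node is given.
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  # Per-node files prefix every line with "Node N "; strip it, then match the key.
  sed 's/^Node [0-9]* //' "$mem_f" | awk -v key="$get:" '$1 == key { print $2 }'
}
# Example: get_meminfo_sketch HugePages_Surp   -> 0 on this machine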
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.315 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172951612 kB' 'MemAvailable: 175832204 kB' 'Buffers: 3896 kB' 'Cached: 11716120 kB' 'SwapCached: 0 kB' 'Active: 8745880 kB' 'Inactive: 3507524 kB' 'Active(anon): 8353872 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536480 kB' 'Mapped: 214252 kB' 'Shmem: 7820484 kB' 'KReclaimable: 251024 kB' 'Slab: 840148 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 589124 kB' 'KernelStack: 20704 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9884208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB'
00:03:33.580 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.580 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.580 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.580 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.580 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:33.580 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:33.580 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:33.580 nr_hugepages=1536
00:03:33.580 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:33.580 resv_hugepages=0
00:03:33.580 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:33.580 surplus_hugepages=0
00:03:33.580 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:33.580 anon_hugepages=0
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
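At this point hugepages.sh has read anon=0, surp=0 and resv=0 from /proc/meminfo and reports nr_hugepages=1536. The consistency check it traces above amounts to the following sketch (variable names are hypothetical, not the script's exact code):
# Sketch of the accounting check logged above (hypothetical names):
requested=1536      # hugepages the test asked for
nr_hugepages=1536   # hugepages the script accounted for
surp=0 resv=0       # surplus / reserved pages read from /proc/meminfo
(( requested == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"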
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.581 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172946832 kB' 'MemAvailable: 175827424 kB' 'Buffers: 3896 kB' 'Cached: 11716152 kB' 'SwapCached: 0 kB' 'Active: 8749544 kB' 'Inactive: 3507524 kB' 'Active(anon): 8357536 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540152 kB' 'Mapped: 214252 kB' 'Shmem: 7820516 kB' 'KReclaimable: 251024 kB' 'Slab: 840148 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 589124 kB' 'KernelStack: 20512 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9885708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315568 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB'
00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
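(Editor's note) Once get_meminfo returns 1536 and the (( 1536 == nr_hugepages + surp + resv )) check passes, get_nodes enumerates /sys/devices/system/node/node<N> and records each node's hugepage count (512 on node0 and 1024 on node1 in this log). A rough standalone equivalent, inferred from the trace; the array and variable names below are illustrative, not the script's own.

# Sketch: read per-node HugePages_Total from sysfs and sum it, the way the
# traced get_nodes / per-node checks do.
shopt -s extglob nullglob

declare -a node_hp
for node_dir in /sys/devices/system/node/node+([0-9]); do
    id=${node_dir##*node}
    # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
    node_hp[$id]=$(awk -v n="$id" \
        '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" {print $4}' \
        "$node_dir/meminfo")
done

total=0
for id in "${!node_hp[@]}"; do
    echo "node${id}=${node_hp[$id]}"
    (( total += node_hp[id] ))
done
echo "sum=${total}"    # the test expects this to match the system-wide HugePages_Total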
00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86097980 kB' 'MemUsed: 11564704 kB' 'SwapCached: 0 kB' 'Active: 6064200 kB' 'Inactive: 3335888 kB' 'Active(anon): 5906660 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9240544 kB' 'Mapped: 122316 kB' 'AnonPages: 162672 kB' 'Shmem: 5747116 kB' 'KernelStack: 11752 kB' 'PageTables: 4704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133580 kB' 'Slab: 406644 kB' 'SReclaimable: 133580 kB' 'SUnreclaim: 273064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:33.582 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.583 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 86854644 kB' 'MemUsed: 6863824 kB' 'SwapCached: 0 kB' 'Active: 2679752 kB' 'Inactive: 171636 kB' 'Active(anon): 2445284 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171636 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479528 kB' 'Mapped: 91420 kB' 'AnonPages: 371904 kB' 'Shmem: 2073424 kB' 'KernelStack: 8824 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117444 kB' 'Slab: 433480 kB' 'SReclaimable: 117444 kB' 'SUnreclaim: 316036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.584 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.585 19:09:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:33.585 node0=512 expecting 512 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:33.585 node1=1024 expecting 1024 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:33.585 00:03:33.585 real 0m2.740s 00:03:33.585 user 0m1.111s 00:03:33.585 sys 0m1.661s 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.585 19:09:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:33.585 ************************************ 00:03:33.585 END TEST custom_alloc 00:03:33.585 ************************************ 00:03:33.585 19:09:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:33.585 19:09:44 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:33.585 19:09:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.585 19:09:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.585 19:09:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:33.585 ************************************ 00:03:33.585 START TEST no_shrink_alloc 00:03:33.585 ************************************ 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:33.585 19:09:44 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.585 19:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.127 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:36.127 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.127 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.127 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:36.127 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.127 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.127 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.127 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.127 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.127 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.127 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.127 19:09:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173979976 kB' 'MemAvailable: 176860568 kB' 'Buffers: 3896 kB' 'Cached: 11716256 kB' 'SwapCached: 0 kB' 'Active: 8744476 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352468 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535112 kB' 'Mapped: 213876 kB' 'Shmem: 7820620 kB' 'KReclaimable: 251024 kB' 'Slab: 840276 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 589252 kB' 'KernelStack: 20592 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9882760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315612 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 
19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.128 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173979876 kB' 'MemAvailable: 176860468 kB' 'Buffers: 3896 kB' 'Cached: 11716260 kB' 'SwapCached: 0 kB' 'Active: 8745168 kB' 'Inactive: 3507524 kB' 'Active(anon): 8353160 
kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535848 kB' 'Mapped: 213776 kB' 'Shmem: 7820624 kB' 'KReclaimable: 251024 kB' 'Slab: 840248 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 589224 kB' 'KernelStack: 20624 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9881296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.129 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- 
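
The hugepage counters in the dump above are internally consistent: HugePages_Total 1024 pages at a Hugepagesize of 2048 kB gives Hugetlb = 1024 x 2048 kB = 2097152 kB (2 GiB), which matches the reported value, and HugePages_Free 1024 with Rsvd/Surp 0 means none of the pool is in use yet. A quick consistency check, with the values copied from the trace (on a live host they would be read from /proc/meminfo):

    total=1024 size_kb=2048 hugetlb_kb=2097152
    (( total * size_kb == hugetlb_kb )) && echo "Hugetlb consistent: $((total * size_kb)) kB"
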
setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.130 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 
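
For reference, the allocation being verified here ultimately goes through the standard kernel hugepage interfaces; the SPDK setup.sh wrapper drives these for you, so the lines below only illustrate the mechanism and are not a transcript of what setup.sh executes.

    # System-wide request for 1024 2 MiB pages:
    echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
    # Per-node request (node 0), matching the nodes_test[0]=1024 seen earlier:
    echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # Counters that verify_nr_hugepages reads back:
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
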
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173977712 kB' 'MemAvailable: 176858304 kB' 'Buffers: 3896 kB' 'Cached: 11716280 kB' 'SwapCached: 0 kB' 'Active: 8745220 kB' 'Inactive: 3507524 kB' 'Active(anon): 8353212 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535640 kB' 'Mapped: 213784 kB' 'Shmem: 7820644 kB' 
'KReclaimable: 251024 kB' 'Slab: 840248 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 589224 kB' 'KernelStack: 20640 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9882796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.131 19:09:46 
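
The trace repeats a full meminfo scan once per key (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd, then HugePages_Total). Purely as an illustration of the same lookup done in a single pass, not as a description of how setup/common.sh is written, one could cache the whole file in an associative array:

    declare -A mem
    while IFS=': ' read -r var val _; do
        mem["$var"]=$val
    done < /proc/meminfo
    echo "anon=${mem[AnonHugePages]} surp=${mem[HugePages_Surp]} resv=${mem[HugePages_Rsvd]} total=${mem[HugePages_Total]}"
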
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.131 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- 
00:03:36.132 19:09:46 setup.sh.hugepages.no_shrink_alloc -- [xtrace elided: setup/common.sh@31-32 step through the remaining /proc/meminfo fields (Slab through HugePages_Free), continuing past each one until HugePages_Rsvd is reached]
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:36.133 nr_hugepages=1024
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:36.133 resv_hugepages=0
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:36.133 surplus_hugepages=0
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:36.133 anon_hugepages=0
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.133 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.134 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173980168 kB' 'MemAvailable: 176860760 kB' 'Buffers: 3896 kB' 'Cached: 11716320 kB' 'SwapCached: 0 kB' 'Active: 8744804 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352796 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535384 kB' 'Mapped: 213776 kB' 'Shmem: 7820684 kB' 'KReclaimable: 251024 kB' 'Slab: 840248 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 589224 kB' 'KernelStack: 20544 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9882820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB'
00:03:36.134 19:09:46 setup.sh.hugepages.no_shrink_alloc -- [xtrace elided: setup/common.sh@31-32 iterate over every field of the snapshot above, continuing past each one that is not HugePages_Total]
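The iterations elided above all follow the same pattern: common.sh's get_meminfo helper splits each "Field: value" line on IFS=': ' and skips ahead until the requested field name matches, then echoes its value. A minimal sketch of that parsing loop, simplified to the /proc/meminfo case (the function name is illustrative, not the real helper):

  get_meminfo_sketch() {
    local get=$1 var val _                 # e.g. get=HugePages_Total
    while IFS=': ' read -r var val _; do
      # every non-matching field is the long run of "continue" in the trace
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1                               # field not present
  }
  get_meminfo_sketch HugePages_Total       # prints 1024 on this box

Against the snapshot printed above this yields 1024 for HugePages_Total and 0 for HugePages_Rsvd, matching the echo/return values in the preserved trace lines.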
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.135 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.136 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85034020 kB' 'MemUsed: 12628664 kB' 'SwapCached: 0 kB' 'Active: 6063676 kB' 'Inactive: 3335888 kB' 'Active(anon): 5906136 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9240668 kB' 'Mapped: 122332 kB' 'AnonPages: 162096 kB' 'Shmem: 5747240 kB' 'KernelStack: 11640 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133580 kB' 'Slab: 406752 kB' 'SReclaimable: 133580 kB' 'SUnreclaim: 273172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
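For the per-node lookup (get_meminfo HugePages_Surp 0) the traced commands switch the input file to the node's sysfs meminfo and strip the leading "Node <n> " prefix so the same field parsing applies. A short sketch of that branch built from the expansions shown in the trace (the mapfile redirection is an assumption, and extended globs must be enabled for the +([0-9]) pattern):

  node=0
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  shopt -s extglob                                  # where the harness enables this is not shown here
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")                  # "Node 0 MemFree: ..." -> "MemFree: ..."
  printf '%s\n' "${mem[@]}" | grep HugePages_Surp   # prints the HugePages_Surp line (0 on node0)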
00:03:36.136 19:09:46 setup.sh.hugepages.no_shrink_alloc -- [xtrace elided: setup/common.sh@31-32 iterate over every field of the node0 snapshot above, continuing past each one that is not HugePages_Surp]
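The surplus value looked up here feeds the per-node bookkeeping that follows: reserved and surplus pages are folded into each node's expected count, which is then compared with what the kernel reports. A simplified rendering of that check using this run's numbers (array names follow the trace, but the real bookkeeping in hugepages.sh is more involved):

  nodes_sys=( [0]=1024 [1]=0 )   # kernel view, filled in by get_nodes above
  nodes_test=( [0]=1024 )        # pages the test expects on the node it used
  resv=0 surp=0                  # both reported as 0 in this run
  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv + surp ))
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]   # mirrors the @130 comparison
  done

This prints "node0=1024 expecting 1024", which is exactly the line the test emits next.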
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:36.137 node0=1024 expecting 1024
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.137 19:09:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:38.679 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:38.679 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:38.679 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:38.679 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:38.679 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:38.679 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:38.679 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:38.679 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.680 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173973360 kB' 'MemAvailable: 176853952 kB' 'Buffers: 3896 kB' 'Cached: 11716384 kB' 'SwapCached: 0 kB' 'Active: 8746584 kB' 'Inactive: 3507524 kB' 'Active(anon): 8354576 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536468 kB' 'Mapped: 213840 kB' 'Shmem: 7820748 kB' 'KReclaimable: 251024 kB' 'Slab: 839724 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 588700 kB' 'KernelStack: 20496 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9917800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB'
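Before re-reading the counters, hugepages.sh@96 tests the transparent-hugepage mode string ("always [madvise] never") for "[never]" and only then bothers with AnonHugePages. The string has the format of /sys/kernel/mm/transparent_hugepage/enabled, but the trace does not show where it is read from, so the path in the sketch below is an assumption:

  # path is an assumption inferred from the "always [madvise] never" value in the trace
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  if [[ $thp != *"[never]"* ]]; then
    # THP is not disabled, so anonymous huge pages may exist; read the counter
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "anon_hugepages=${anon:-0}"   # 0 kB in the snapshot above
  fi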
-- setup/common.sh@32 -- # continue 00:03:38.680
[repetitive xtrace elided: setup/common.sh@31-32 keeps reading 'var: val' pairs from the snapshot above and hits 'continue' for every key that is not AnonHugePages]
19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.681 19:09:49
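The entries above are setup/common.sh's get_meminfo resolving a single field: it reads the snapshot it just printed as 'var: val' pairs with IFS=': ', skips every key that is not the requested one, and echoes the value on the first match, which is how setup/hugepages.sh@97 arrives at anon=0. A minimal self-contained sketch of that lookup pattern follows; it reads /proc/meminfo directly, and the function name get_meminfo_value is illustrative rather than the SPDK helper itself.

    #!/usr/bin/env bash
    # Sketch: fetch one field from /proc/meminfo the way the trace above does it
    # (IFS=': ' / read -r var val _ / compare / echo value, return 0 on the first hit).
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. prints 0 for AnonHugePages on this system
                return 0
            fi
        done < /proc/meminfo
        return 1              # requested key not present
    }

    get_meminfo_value AnonHugePages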
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173974444 kB' 'MemAvailable: 176855036 kB' 'Buffers: 3896 kB' 'Cached: 11716388 kB' 'SwapCached: 0 kB' 'Active: 8744572 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352564 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534936 kB' 'Mapped: 213764 kB' 'Shmem: 7820752 kB' 'KReclaimable: 251024 kB' 'Slab: 839628 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 588604 kB' 'KernelStack: 20400 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9880304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.681 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:38.681
[repetitive xtrace elided: the same per-key scan runs over the snapshot above until it reaches HugePages_Surp]
19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.683
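Each lookup is preceded by the same source selection at setup/common.sh@18-@29: node is empty here, so the per-node path /sys/devices/system/node/node/meminfo does not exist, the global /proc/meminfo is read via mapfile, and the 'Node <n> ' prefix strip is a no-op. A rough reconstruction of that step is sketched below; the function name read_meminfo_lines and the explicit shopt call are assumptions added to make the example runnable, while the remaining commands mirror the traced ones.

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below is an extended glob

    # Sketch: choose the meminfo source and normalise its lines, as seen in the trace.
    read_meminfo_lines() {
        local node=${1:-} mem_f=/proc/meminfo mem
        # A per-node file only exists when a NUMA node number was supplied.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix used by per-node files
        printf '%s\n' "${mem[@]}"
    }

    read_meminfo_lines       # global /proc/meminfo
    read_meminfo_lines 0     # NUMA node 0, when present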
19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173973756 kB' 'MemAvailable: 176854348 kB' 'Buffers: 3896 kB' 'Cached: 11716404 kB' 'SwapCached: 0 kB' 'Active: 8744380 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352372 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534764 kB' 'Mapped: 213764 kB' 'Shmem: 7820768 kB' 'KReclaimable: 251024 kB' 'Slab: 839628 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 588604 kB' 'KernelStack: 20400 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9880328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.683 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.683
[repetitive xtrace elided: the per-key scan continues over the snapshot above and resumes below at the CmaTotal field, just before the HugePages_* entries]
19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 19:09:49
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.684 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.684 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.684 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.684 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.685 nr_hugepages=1024 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.685 resv_hugepages=0 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.685 surplus_hugepages=0 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.685 anon_hugepages=0 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.685 19:09:49 
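At this point setup/hugepages.sh@102-@110 has all four counters for this pass (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and checks that the configured 1024 pages are still fully accounted for before re-reading HugePages_Total from the kernel. The sketch below condenses that accounting step using the values from this run; the variable names follow the log, and the awk re-read is an illustrative stand-in for the script's subsequent get_meminfo HugePages_Total call.

    #!/usr/bin/env bash
    # Counter values taken from the trace above.
    nr_hugepages=1024
    surp=0    # HugePages_Surp
    resv=0    # HugePages_Rsvd
    anon=0    # AnonHugePages

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # The expected 1024 pages must be explained by nr_hugepages plus surplus and
    # reserved pages, and nr_hugepages itself must still equal 1024 (nothing was shrunk).
    (( 1024 == nr_hugepages + surp + resv )) || exit 1
    (( 1024 == nr_hugepages )) || exit 1

    # Re-read the kernel's own view of the pool, as the script then does.
    awk '/^HugePages_Total:/ {print "HugePages_Total:", $2}' /proc/meminfo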
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173973756 kB' 'MemAvailable: 176854348 kB' 'Buffers: 3896 kB' 'Cached: 11716428 kB' 'SwapCached: 0 kB' 'Active: 8744420 kB' 'Inactive: 3507524 kB' 'Active(anon): 8352412 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534772 kB' 'Mapped: 213764 kB' 'Shmem: 7820792 kB' 'KReclaimable: 251024 kB' 'Slab: 839628 kB' 'SReclaimable: 251024 kB' 'SUnreclaim: 588604 kB' 'KernelStack: 20400 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9880484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2995156 kB' 'DirectMap2M: 16607232 kB' 'DirectMap1G: 182452224 kB' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.685 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.686 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.687 19:09:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85013116 kB' 'MemUsed: 12649568 kB' 'SwapCached: 0 kB' 'Active: 6063096 kB' 'Inactive: 3335888 kB' 'Active(anon): 5905556 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9240756 kB' 'Mapped: 122316 kB' 'AnonPages: 161368 kB' 'Shmem: 5747328 kB' 'KernelStack: 11656 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133580 kB' 'Slab: 406616 kB' 'SReclaimable: 133580 kB' 'SUnreclaim: 273036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
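When a node argument is supplied, the same lookup is pointed at /sys/devices/system/node/node<N>/meminfo instead of /proc/meminfo; every line there carries a "Node <N> " prefix, which the traced code strips with an extglob parameter expansion before re-reading the fields. A rough equivalent, again with an illustrative helper name:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Hypothetical per-node variant of the lookup sketched earlier.
node_meminfo_value() {
  local get=$1 node=$2 mem_f=/proc/meminfo var val _
  local -a mem
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  # Node meminfo lines look like "Node 0 HugePages_Surp: 0"; drop the prefix.
  mem=("${mem[@]#Node +([0-9]) }")
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < <(printf '%s\n' "${mem[@]}")
  return 1
}

node_meminfo_value HugePages_Surp 0   # prints 0 for NUMA node 0 on this box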
00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.687 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:38.688 node0=1024 expecting 1024 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- 
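Per node, the surplus just read is added back into the expected count and the test announces what it expects to find (node0=1024 expecting 1024 here); once the whole hugepages suite is done it also returns every pool to zero, which is what the clear_hp loop a little further down does by writing 0 into each hugepages-*/nr_hugepages file. A hedged sketch of that cleanup step, using the sysfs layout visible in the trace and requiring root:

#!/usr/bin/env bash
# Sketch of the per-node hugepage cleanup (clear_hp) the suite ends with.
# Writing 0 to nr_hugepages hands the pages of that size back to the kernel.
for node in /sys/devices/system/node/node[0-9]*; do
  for hp in "$node"/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"     # needs root
  done
done
export CLEAR_HUGE=yes               # the scripts also record that the pools were cleared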
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:38.688 00:03:38.688 real 0m4.963s 00:03:38.688 user 0m1.942s 00:03:38.688 sys 0m3.031s 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.688 19:09:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:38.688 ************************************ 00:03:38.688 END TEST no_shrink_alloc 00:03:38.688 ************************************ 00:03:38.688 19:09:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:38.688 19:09:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:38.688 00:03:38.688 real 0m20.213s 00:03:38.688 user 0m7.735s 00:03:38.688 sys 0m11.910s 00:03:38.688 19:09:49 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.688 19:09:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:38.688 ************************************ 00:03:38.688 END TEST hugepages 00:03:38.688 ************************************ 00:03:38.688 19:09:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:38.688 19:09:49 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:38.688 19:09:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.688 19:09:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.688 19:09:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.688 ************************************ 00:03:38.688 START TEST driver 00:03:38.688 ************************************ 00:03:38.688 19:09:49 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:38.688 * Looking for test storage... 
00:03:38.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:38.688 19:09:49 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:38.688 19:09:49 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.688 19:09:49 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.919 19:09:53 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:42.919 19:09:53 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.919 19:09:53 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.919 19:09:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:42.919 ************************************ 00:03:42.919 START TEST guess_driver 00:03:42.919 ************************************ 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:42.919 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:42.919 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:42.919 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:42.919 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:42.919 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:42.919 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:42.919 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:42.919 19:09:53 setup.sh.driver.guess_driver 
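The guess_driver test that starts here settles on vfio-pci only when the IOMMU is actually usable: it reads the unsafe no-IOMMU knob, counts the populated /sys/kernel/iommu_groups entries (174 on this machine), and confirms via modprobe --show-depends that vfio_pci resolves to real .ko modules on the running kernel; otherwise the pick would come back as "No valid driver found", the string the trace is seen ruling out. A condensed sketch of that decision, with the exact conditional simplified and function names of my own:

#!/usr/bin/env bash
shopt -s nullglob   # an empty iommu_groups dir should count as zero, not one

# Condensed sketch of the vfio-pci decision traced above (names illustrative).
have_vfio() {
  local unsafe_vfio=N
  [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
    unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
  local iommu_groups=(/sys/kernel/iommu_groups/*)
  # Usable with populated IOMMU groups, or with unsafe no-IOMMU mode enabled.
  if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
    # The module must resolve to actual .ko objects for this kernel.
    [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]] && return 0
  fi
  return 1
}

if have_vfio; then
  driver=vfio-pci
else
  driver='No valid driver found'
fi
echo "Looking for driver=$driver"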
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:42.919 Looking for driver=vfio-pci 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.919 19:09:53 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.456 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.457 19:09:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.457 19:09:56 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.457 19:09:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.026 19:09:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:46.026 19:09:56 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:46.026 19:09:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.284 19:09:56 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:46.284 19:09:56 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:46.284 19:09:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.284 19:09:56 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.480 00:03:50.480 real 0m7.283s 00:03:50.480 user 0m2.024s 00:03:50.480 sys 0m3.747s 00:03:50.480 19:10:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.480 19:10:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.480 ************************************ 00:03:50.481 END TEST guess_driver 00:03:50.481 ************************************ 00:03:50.481 19:10:00 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:50.481 00:03:50.481 real 0m11.177s 00:03:50.481 user 0m3.110s 00:03:50.481 sys 0m5.796s 00:03:50.481 19:10:00 
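Each of the repeated marker checks above is one device line from the setup.sh config output that guess_driver walks through: it discards the first four whitespace-separated fields, keeps the arrow marker and the driver the device is actually bound to, and bumps a failure counter whenever that driver is not the one it picked; the test only passes with fail still at 0. Roughly, under the assumption that the config lines keep the four-fields-then-arrow shape seen in this trace:

#!/usr/bin/env bash
# Sketch of the verification loop behind the repeated [[ -> == -> ]] and
# [[ vfio-pci == vfio-pci ]] comparisons above.
expected=vfio-pci
fail=0
while read -r _ _ _ _ marker setup_driver; do
  [[ $marker == '->' ]] || continue           # only device lines carry the arrow
  [[ $setup_driver == "$expected" ]] || fail=$((fail + 1))
done < <(./scripts/setup.sh config)           # path illustrative; the test calls
                                              # the spdk/scripts/setup.sh in the workspace
(( fail == 0 )) && echo "all devices would bind to $expected"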
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.481 19:10:00 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.481 ************************************ 00:03:50.481 END TEST driver 00:03:50.481 ************************************ 00:03:50.481 19:10:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:50.481 19:10:00 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:50.481 19:10:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.481 19:10:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.481 19:10:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.481 ************************************ 00:03:50.481 START TEST devices 00:03:50.481 ************************************ 00:03:50.481 19:10:00 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:50.481 * Looking for test storage... 00:03:50.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:50.481 19:10:00 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:50.481 19:10:00 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:50.481 19:10:00 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.481 19:10:00 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:53.018 19:10:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:53.018 
19:10:03 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:53.018 No valid GPT data, bailing 00:03:53.018 19:10:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.018 19:10:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:53.018 19:10:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:53.018 19:10:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:53.018 19:10:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:53.018 19:10:03 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:53.018 19:10:03 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.018 19:10:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:53.018 ************************************ 00:03:53.018 START TEST nvme_mount 00:03:53.018 ************************************ 00:03:53.018 19:10:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:53.018 19:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:53.018 19:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:53.018 19:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.018 19:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.018 19:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:53.019 19:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:53.954 Creating new GPT entries in memory. 00:03:53.954 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:53.954 other utilities. 00:03:53.954 19:10:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:53.954 19:10:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.954 19:10:04 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:53.954 19:10:04 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:53.954 19:10:04 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:54.948 Creating new GPT entries in memory. 00:03:54.948 The operation has completed successfully. 00:03:54.948 19:10:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:54.948 19:10:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.948 19:10:05 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1409911 00:03:54.948 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.948 19:10:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:54.948 19:10:05 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.948 19:10:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:54.948 19:10:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.207 19:10:05 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.207 19:10:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:57.744 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.744 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:57.744 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:57.744 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:57.745 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:57.745 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:58.004 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:58.004 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:58.004 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:58.004 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.004 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.005 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.005 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:58.005 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.005 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.005 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.005 19:10:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.005 19:10:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.005 19:10:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:00.580 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:00.581 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:00.581 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.581 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:00.581 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.581 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.581 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:00.581 19:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.581 19:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.581 19:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.120 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.120 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:03.120 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:03.120 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.120 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.120 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.120 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.120 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.120 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.120 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.121 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:03.121 00:04:03.121 real 0m10.149s 00:04:03.121 user 0m2.881s 00:04:03.121 sys 0m5.032s 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.121 19:10:13 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:03.121 ************************************ 00:04:03.121 END TEST nvme_mount 00:04:03.121 ************************************ 00:04:03.121 19:10:13 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:03.121 19:10:13 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:03.121 19:10:13 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.121 19:10:13 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.121 19:10:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.121 ************************************ 00:04:03.121 START TEST dm_mount 00:04:03.121 ************************************ 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:03.121 19:10:13 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:04.499 Creating new GPT entries in memory. 00:04:04.499 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.499 other utilities. 00:04:04.499 19:10:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.499 19:10:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.499 19:10:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:04.499 19:10:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.499 19:10:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:05.436 Creating new GPT entries in memory. 00:04:05.436 The operation has completed successfully. 00:04:05.436 19:10:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.436 19:10:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.436 19:10:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.436 19:10:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.436 19:10:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:06.374 The operation has completed successfully. 00:04:06.374 19:10:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.374 19:10:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.374 19:10:17 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1414077 00:04:06.374 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:06.374 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.374 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.374 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:06.374 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.375 19:10:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.971 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:08.972 19:10:19 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.972 19:10:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.511 19:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:11.511 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:11.511 00:04:11.511 real 0m8.185s 00:04:11.511 user 0m1.798s 00:04:11.511 sys 0m3.326s 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.511 19:10:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:11.511 ************************************ 00:04:11.511 END TEST dm_mount 00:04:11.511 ************************************ 00:04:11.511 19:10:22 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:11.511 19:10:22 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:11.511 19:10:22 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:11.511 19:10:22 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.511 19:10:22 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.512 19:10:22 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:11.512 19:10:22 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:11.512 19:10:22 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:11.771 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:11.771 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:11.771 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:11.771 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:11.771 19:10:22 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:11.771 19:10:22 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:11.771 19:10:22 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:11.771 19:10:22 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:11.771 19:10:22 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:11.771 19:10:22 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:11.771 19:10:22 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:11.771 00:04:11.771 real 0m21.817s 00:04:11.771 user 0m5.895s 00:04:11.771 sys 0m10.472s 00:04:11.771 19:10:22 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.771 19:10:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:11.771 ************************************ 00:04:11.771 END TEST devices 00:04:11.771 ************************************ 00:04:11.771 19:10:22 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:11.771 00:04:11.771 real 1m12.545s 00:04:11.771 user 0m23.216s 00:04:11.771 sys 0m39.669s 00:04:11.771 19:10:22 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.771 19:10:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:11.771 ************************************ 00:04:11.771 END TEST setup.sh 00:04:11.771 ************************************ 00:04:11.771 19:10:22 -- common/autotest_common.sh@1142 -- # return 0 00:04:11.771 19:10:22 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:14.307 Hugepages 00:04:14.307 node hugesize free / total 00:04:14.307 node0 1048576kB 0 / 0 00:04:14.307 node0 2048kB 2048 / 2048 00:04:14.307 node1 1048576kB 0 / 0 00:04:14.307 node1 2048kB 0 / 0 00:04:14.307 00:04:14.307 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:14.307 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:14.307 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:14.307 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:14.307 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:14.307 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:14.307 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:14.307 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:14.307 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:14.568 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:14.568 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:14.568 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:14.568 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:14.568 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:14.568 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:14.568 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:14.568 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:14.568 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:14.568 19:10:25 -- spdk/autotest.sh@130 -- # uname -s 00:04:14.568 19:10:25 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:14.568 19:10:25 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:14.568 19:10:25 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:17.101 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:17.101 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.035 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:18.035 19:10:28 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:18.972 19:10:29 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:18.972 19:10:29 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:18.972 19:10:29 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:18.972 19:10:29 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:18.972 19:10:29 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:18.972 19:10:29 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:18.972 19:10:29 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.972 19:10:29 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:18.972 19:10:29 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:19.232 19:10:29 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:19.232 19:10:29 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:19.232 19:10:29 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.134 Waiting for block devices as requested 00:04:21.393 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:21.393 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:21.393 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:21.652 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:21.652 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:21.652 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:21.652 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:21.912 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:21.912 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:21.912 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:21.912 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:22.170 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:22.170 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:22.170 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:22.170 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:22.430 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:22.430 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:22.430 19:10:33 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:22.430 19:10:33 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:22.430 19:10:33 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:22.430 19:10:33 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:22.430 19:10:33 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:22.430 19:10:33 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:22.430 19:10:33 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:22.430 19:10:33 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:22.430 19:10:33 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:22.430 19:10:33 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:22.430 19:10:33 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:22.430 19:10:33 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:22.430 19:10:33 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:22.430 19:10:33 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:22.430 19:10:33 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:22.430 19:10:33 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:22.430 19:10:33 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:22.430 19:10:33 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:22.430 19:10:33 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:22.430 19:10:33 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:22.430 19:10:33 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:22.430 19:10:33 -- common/autotest_common.sh@1557 -- # continue 00:04:22.430 19:10:33 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:22.430 19:10:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.430 19:10:33 -- common/autotest_common.sh@10 -- # set +x 00:04:22.688 19:10:33 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:22.688 19:10:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.688 19:10:33 -- common/autotest_common.sh@10 -- # set +x 00:04:22.688 19:10:33 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.223 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:25.223 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:25.223 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:25.792 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:25.792 19:10:36 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:25.792 19:10:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.792 19:10:36 -- common/autotest_common.sh@10 -- # set +x 00:04:25.792 19:10:36 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:25.792 19:10:36 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:25.792 19:10:36 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.792 19:10:36 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:25.792 19:10:36 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:25.792 19:10:36 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:25.792 19:10:36 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:25.792 19:10:36 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:25.792 19:10:36 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.792 19:10:36 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.793 19:10:36 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:26.052 19:10:36 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:26.052 19:10:36 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:26.052 19:10:36 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:26.052 19:10:36 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:26.052 19:10:36 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:26.052 19:10:36 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:26.052 19:10:36 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:26.052 19:10:36 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:26.052 19:10:36 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:26.052 19:10:36 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1422643 00:04:26.052 19:10:36 -- common/autotest_common.sh@1598 -- # waitforlisten 1422643 00:04:26.052 19:10:36 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.052 19:10:36 -- common/autotest_common.sh@829 -- # '[' -z 1422643 ']' 00:04:26.052 19:10:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.052 19:10:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.052 19:10:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.052 19:10:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.052 19:10:36 -- common/autotest_common.sh@10 -- # set +x 00:04:26.052 [2024-07-15 19:10:36.735279] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:04:26.052 [2024-07-15 19:10:36.735324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422643 ] 00:04:26.052 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.052 [2024-07-15 19:10:36.761192] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:26.052 [2024-07-15 19:10:36.790713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.052 [2024-07-15 19:10:36.829881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.312 19:10:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.312 19:10:37 -- common/autotest_common.sh@862 -- # return 0 00:04:26.312 19:10:37 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:26.312 19:10:37 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:26.312 19:10:37 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:29.604 nvme0n1 00:04:29.604 19:10:40 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:29.604 [2024-07-15 19:10:40.163797] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:29.604 request: 00:04:29.604 { 00:04:29.604 "nvme_ctrlr_name": "nvme0", 00:04:29.604 "password": "test", 00:04:29.604 "method": "bdev_nvme_opal_revert", 00:04:29.604 "req_id": 1 00:04:29.604 } 00:04:29.604 Got JSON-RPC error response 00:04:29.604 response: 00:04:29.604 { 00:04:29.604 "code": -32602, 00:04:29.604 "message": "Invalid parameters" 00:04:29.604 } 00:04:29.604 19:10:40 -- common/autotest_common.sh@1604 -- # true 00:04:29.604 19:10:40 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:29.604 19:10:40 -- common/autotest_common.sh@1608 -- # killprocess 1422643 00:04:29.604 19:10:40 -- common/autotest_common.sh@948 -- # '[' -z 1422643 ']' 00:04:29.604 19:10:40 -- common/autotest_common.sh@952 -- # kill -0 1422643 00:04:29.604 19:10:40 -- common/autotest_common.sh@953 -- # uname 00:04:29.604 19:10:40 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.604 19:10:40 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1422643 00:04:29.604 19:10:40 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.604 19:10:40 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.604 19:10:40 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1422643' 00:04:29.604 killing process with pid 1422643 00:04:29.604 19:10:40 -- common/autotest_common.sh@967 -- # kill 1422643 00:04:29.604 19:10:40 -- common/autotest_common.sh@972 -- # wait 1422643 00:04:31.012 19:10:41 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:31.012 19:10:41 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:31.012 19:10:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:31.012 19:10:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:31.012 19:10:41 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:31.012 19:10:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.012 19:10:41 -- common/autotest_common.sh@10 -- # set +x 00:04:31.012 19:10:41 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:31.012 
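Aside: the pre-cleanup and opal-revert steps traced above reduce to a small shell sequence: discover the NVMe BDFs from gen_nvme.sh, check via nvme id-ctrl that the controller advertises namespace management (OACS bit 0x8, here 0xe), then attach the controller to a running spdk_tgt and attempt the revert over JSON-RPC, which on this non-OPAL drive correctly fails with -32602. The following is a minimal illustrative sketch, not the harness itself; it assumes an SPDK checkout in $ROOT, a running spdk_tgt, and root privileges, and the sysfs lookup is simplified.

    ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # 1. Discover NVMe BDFs the same way get_nvme_bdfs does.
    bdfs=($("$ROOT/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    for bdf in "${bdfs[@]}"; do
        # 2. Map the BDF back to its kernel controller node, e.g. /dev/nvme0 (simplified lookup).
        ctrlr=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")

        # 3. OACS bit 0x8 means the controller supports namespace management.
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        (( oacs & 0x8 )) || continue

        # 4. Attach the controller to spdk_tgt and try an OPAL revert; a drive without
        #    OPAL support answers with JSON-RPC error -32602, exactly as in the log above.
        "$ROOT/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a "$bdf"
        "$ROOT/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true
    done
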
19:10:41 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:31.012 19:10:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.012 19:10:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.012 19:10:41 -- common/autotest_common.sh@10 -- # set +x 00:04:31.012 ************************************ 00:04:31.012 START TEST env 00:04:31.012 ************************************ 00:04:31.012 19:10:41 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:31.282 * Looking for test storage... 00:04:31.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:31.282 19:10:41 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:31.282 19:10:41 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.282 19:10:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.282 19:10:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.282 ************************************ 00:04:31.282 START TEST env_memory 00:04:31.282 ************************************ 00:04:31.282 19:10:41 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:31.282 00:04:31.282 00:04:31.282 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.282 http://cunit.sourceforge.net/ 00:04:31.282 00:04:31.282 00:04:31.282 Suite: memory 00:04:31.282 Test: alloc and free memory map ...[2024-07-15 19:10:41.954859] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:31.282 passed 00:04:31.282 Test: mem map translation ...[2024-07-15 19:10:41.973992] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:31.282 [2024-07-15 19:10:41.974010] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:31.282 [2024-07-15 19:10:41.974047] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:31.282 [2024-07-15 19:10:41.974055] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:31.282 passed 00:04:31.282 Test: mem map registration ...[2024-07-15 19:10:42.012758] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:31.283 [2024-07-15 19:10:42.012774] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:31.283 passed 00:04:31.283 Test: mem map adjacent registrations ...passed 00:04:31.283 00:04:31.283 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.283 suites 1 1 n/a 0 0 00:04:31.283 tests 4 4 4 0 0 00:04:31.283 asserts 152 152 152 0 n/a 00:04:31.283 00:04:31.283 Elapsed time = 0.132 seconds 00:04:31.283 00:04:31.283 real 0m0.139s 00:04:31.283 user 0m0.133s 
00:04:31.283 sys 0m0.005s 00:04:31.283 19:10:42 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.283 19:10:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:31.283 ************************************ 00:04:31.283 END TEST env_memory 00:04:31.283 ************************************ 00:04:31.283 19:10:42 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.283 19:10:42 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:31.283 19:10:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.283 19:10:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.283 19:10:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.283 ************************************ 00:04:31.283 START TEST env_vtophys 00:04:31.283 ************************************ 00:04:31.283 19:10:42 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:31.543 EAL: lib.eal log level changed from notice to debug 00:04:31.543 EAL: Detected lcore 0 as core 0 on socket 0 00:04:31.543 EAL: Detected lcore 1 as core 1 on socket 0 00:04:31.543 EAL: Detected lcore 2 as core 2 on socket 0 00:04:31.543 EAL: Detected lcore 3 as core 3 on socket 0 00:04:31.543 EAL: Detected lcore 4 as core 4 on socket 0 00:04:31.543 EAL: Detected lcore 5 as core 5 on socket 0 00:04:31.543 EAL: Detected lcore 6 as core 6 on socket 0 00:04:31.543 EAL: Detected lcore 7 as core 8 on socket 0 00:04:31.543 EAL: Detected lcore 8 as core 9 on socket 0 00:04:31.543 EAL: Detected lcore 9 as core 10 on socket 0 00:04:31.543 EAL: Detected lcore 10 as core 11 on socket 0 00:04:31.543 EAL: Detected lcore 11 as core 12 on socket 0 00:04:31.543 EAL: Detected lcore 12 as core 13 on socket 0 00:04:31.543 EAL: Detected lcore 13 as core 16 on socket 0 00:04:31.543 EAL: Detected lcore 14 as core 17 on socket 0 00:04:31.543 EAL: Detected lcore 15 as core 18 on socket 0 00:04:31.543 EAL: Detected lcore 16 as core 19 on socket 0 00:04:31.543 EAL: Detected lcore 17 as core 20 on socket 0 00:04:31.543 EAL: Detected lcore 18 as core 21 on socket 0 00:04:31.543 EAL: Detected lcore 19 as core 25 on socket 0 00:04:31.543 EAL: Detected lcore 20 as core 26 on socket 0 00:04:31.543 EAL: Detected lcore 21 as core 27 on socket 0 00:04:31.543 EAL: Detected lcore 22 as core 28 on socket 0 00:04:31.543 EAL: Detected lcore 23 as core 29 on socket 0 00:04:31.543 EAL: Detected lcore 24 as core 0 on socket 1 00:04:31.543 EAL: Detected lcore 25 as core 1 on socket 1 00:04:31.543 EAL: Detected lcore 26 as core 2 on socket 1 00:04:31.543 EAL: Detected lcore 27 as core 3 on socket 1 00:04:31.543 EAL: Detected lcore 28 as core 4 on socket 1 00:04:31.543 EAL: Detected lcore 29 as core 5 on socket 1 00:04:31.543 EAL: Detected lcore 30 as core 6 on socket 1 00:04:31.543 EAL: Detected lcore 31 as core 9 on socket 1 00:04:31.543 EAL: Detected lcore 32 as core 10 on socket 1 00:04:31.543 EAL: Detected lcore 33 as core 11 on socket 1 00:04:31.543 EAL: Detected lcore 34 as core 12 on socket 1 00:04:31.543 EAL: Detected lcore 35 as core 13 on socket 1 00:04:31.543 EAL: Detected lcore 36 as core 16 on socket 1 00:04:31.543 EAL: Detected lcore 37 as core 17 on socket 1 00:04:31.543 EAL: Detected lcore 38 as core 18 on socket 1 00:04:31.543 EAL: Detected lcore 39 as core 19 on socket 1 00:04:31.543 EAL: Detected lcore 40 as core 20 on socket 1 00:04:31.543 EAL: Detected 
lcore 41 as core 21 on socket 1 00:04:31.543 EAL: Detected lcore 42 as core 24 on socket 1 00:04:31.543 EAL: Detected lcore 43 as core 25 on socket 1 00:04:31.543 EAL: Detected lcore 44 as core 26 on socket 1 00:04:31.543 EAL: Detected lcore 45 as core 27 on socket 1 00:04:31.543 EAL: Detected lcore 46 as core 28 on socket 1 00:04:31.543 EAL: Detected lcore 47 as core 29 on socket 1 00:04:31.543 EAL: Detected lcore 48 as core 0 on socket 0 00:04:31.543 EAL: Detected lcore 49 as core 1 on socket 0 00:04:31.543 EAL: Detected lcore 50 as core 2 on socket 0 00:04:31.543 EAL: Detected lcore 51 as core 3 on socket 0 00:04:31.543 EAL: Detected lcore 52 as core 4 on socket 0 00:04:31.543 EAL: Detected lcore 53 as core 5 on socket 0 00:04:31.543 EAL: Detected lcore 54 as core 6 on socket 0 00:04:31.543 EAL: Detected lcore 55 as core 8 on socket 0 00:04:31.543 EAL: Detected lcore 56 as core 9 on socket 0 00:04:31.543 EAL: Detected lcore 57 as core 10 on socket 0 00:04:31.543 EAL: Detected lcore 58 as core 11 on socket 0 00:04:31.543 EAL: Detected lcore 59 as core 12 on socket 0 00:04:31.543 EAL: Detected lcore 60 as core 13 on socket 0 00:04:31.543 EAL: Detected lcore 61 as core 16 on socket 0 00:04:31.543 EAL: Detected lcore 62 as core 17 on socket 0 00:04:31.543 EAL: Detected lcore 63 as core 18 on socket 0 00:04:31.543 EAL: Detected lcore 64 as core 19 on socket 0 00:04:31.543 EAL: Detected lcore 65 as core 20 on socket 0 00:04:31.543 EAL: Detected lcore 66 as core 21 on socket 0 00:04:31.543 EAL: Detected lcore 67 as core 25 on socket 0 00:04:31.543 EAL: Detected lcore 68 as core 26 on socket 0 00:04:31.543 EAL: Detected lcore 69 as core 27 on socket 0 00:04:31.543 EAL: Detected lcore 70 as core 28 on socket 0 00:04:31.543 EAL: Detected lcore 71 as core 29 on socket 0 00:04:31.543 EAL: Detected lcore 72 as core 0 on socket 1 00:04:31.543 EAL: Detected lcore 73 as core 1 on socket 1 00:04:31.543 EAL: Detected lcore 74 as core 2 on socket 1 00:04:31.543 EAL: Detected lcore 75 as core 3 on socket 1 00:04:31.543 EAL: Detected lcore 76 as core 4 on socket 1 00:04:31.543 EAL: Detected lcore 77 as core 5 on socket 1 00:04:31.543 EAL: Detected lcore 78 as core 6 on socket 1 00:04:31.543 EAL: Detected lcore 79 as core 9 on socket 1 00:04:31.543 EAL: Detected lcore 80 as core 10 on socket 1 00:04:31.543 EAL: Detected lcore 81 as core 11 on socket 1 00:04:31.543 EAL: Detected lcore 82 as core 12 on socket 1 00:04:31.543 EAL: Detected lcore 83 as core 13 on socket 1 00:04:31.543 EAL: Detected lcore 84 as core 16 on socket 1 00:04:31.543 EAL: Detected lcore 85 as core 17 on socket 1 00:04:31.543 EAL: Detected lcore 86 as core 18 on socket 1 00:04:31.543 EAL: Detected lcore 87 as core 19 on socket 1 00:04:31.543 EAL: Detected lcore 88 as core 20 on socket 1 00:04:31.543 EAL: Detected lcore 89 as core 21 on socket 1 00:04:31.543 EAL: Detected lcore 90 as core 24 on socket 1 00:04:31.543 EAL: Detected lcore 91 as core 25 on socket 1 00:04:31.543 EAL: Detected lcore 92 as core 26 on socket 1 00:04:31.543 EAL: Detected lcore 93 as core 27 on socket 1 00:04:31.543 EAL: Detected lcore 94 as core 28 on socket 1 00:04:31.543 EAL: Detected lcore 95 as core 29 on socket 1 00:04:31.543 EAL: Maximum logical cores by configuration: 128 00:04:31.543 EAL: Detected CPU lcores: 96 00:04:31.543 EAL: Detected NUMA nodes: 2 00:04:31.543 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:31.543 EAL: Detected shared linkage of DPDK 00:04:31.543 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:31.543 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:31.543 EAL: Registered [vdev] bus. 00:04:31.543 EAL: bus.vdev log level changed from disabled to notice 00:04:31.543 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:31.543 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:31.543 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:31.543 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:31.543 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:31.543 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:31.543 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:31.543 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:31.543 EAL: No shared files mode enabled, IPC will be disabled 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Bus pci wants IOVA as 'DC' 00:04:31.544 EAL: Bus vdev wants IOVA as 'DC' 00:04:31.544 EAL: Buses did not request a specific IOVA mode. 00:04:31.544 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:31.544 EAL: Selected IOVA mode 'VA' 00:04:31.544 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.544 EAL: Probing VFIO support... 00:04:31.544 EAL: IOMMU type 1 (Type 1) is supported 00:04:31.544 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:31.544 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:31.544 EAL: VFIO support initialized 00:04:31.544 EAL: Ask a virtual area of 0x2e000 bytes 00:04:31.544 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:31.544 EAL: Setting up physically contiguous memory... 
00:04:31.544 EAL: Setting maximum number of open files to 524288 00:04:31.544 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:31.544 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:31.544 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:31.544 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.544 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:31.544 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.544 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.544 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:31.544 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:31.544 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.544 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:31.544 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.544 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.544 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:31.544 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:31.544 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.544 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:31.544 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.544 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.544 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:31.544 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:31.544 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.544 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:31.544 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.544 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.544 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:31.544 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:31.544 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:31.544 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.544 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:31.544 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.544 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.544 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:31.544 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:31.544 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.544 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:31.544 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.544 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.544 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:31.544 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:31.544 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.544 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:31.544 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.544 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.544 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:31.544 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:31.544 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.544 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:31.544 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:31.544 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.544 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:31.544 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:31.544 EAL: Hugepages will be freed exactly as allocated. 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: TSC frequency is ~2300000 KHz 00:04:31.544 EAL: Main lcore 0 is ready (tid=7f388eb72a00;cpuset=[0]) 00:04:31.544 EAL: Trying to obtain current memory policy. 00:04:31.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.544 EAL: Restoring previous memory policy: 0 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was expanded by 2MB 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Mem event callback 'spdk:(nil)' registered 00:04:31.544 00:04:31.544 00:04:31.544 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.544 http://cunit.sourceforge.net/ 00:04:31.544 00:04:31.544 00:04:31.544 Suite: components_suite 00:04:31.544 Test: vtophys_malloc_test ...passed 00:04:31.544 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:31.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.544 EAL: Restoring previous memory policy: 4 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was expanded by 4MB 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was shrunk by 4MB 00:04:31.544 EAL: Trying to obtain current memory policy. 00:04:31.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.544 EAL: Restoring previous memory policy: 4 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was expanded by 6MB 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was shrunk by 6MB 00:04:31.544 EAL: Trying to obtain current memory policy. 00:04:31.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.544 EAL: Restoring previous memory policy: 4 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was expanded by 10MB 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was shrunk by 10MB 00:04:31.544 EAL: Trying to obtain current memory policy. 
00:04:31.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.544 EAL: Restoring previous memory policy: 4 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was expanded by 18MB 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was shrunk by 18MB 00:04:31.544 EAL: Trying to obtain current memory policy. 00:04:31.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.544 EAL: Restoring previous memory policy: 4 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.544 EAL: Trying to obtain current memory policy. 00:04:31.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.544 EAL: Restoring previous memory policy: 4 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.544 EAL: Trying to obtain current memory policy. 00:04:31.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.544 EAL: Restoring previous memory policy: 4 00:04:31.544 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.544 EAL: request: mp_malloc_sync 00:04:31.544 EAL: No shared files mode enabled, IPC is disabled 00:04:31.544 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.545 EAL: request: mp_malloc_sync 00:04:31.545 EAL: No shared files mode enabled, IPC is disabled 00:04:31.545 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.545 EAL: Trying to obtain current memory policy. 00:04:31.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.545 EAL: Restoring previous memory policy: 4 00:04:31.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.545 EAL: request: mp_malloc_sync 00:04:31.545 EAL: No shared files mode enabled, IPC is disabled 00:04:31.545 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.545 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.804 EAL: request: mp_malloc_sync 00:04:31.804 EAL: No shared files mode enabled, IPC is disabled 00:04:31.804 EAL: Heap on socket 0 was shrunk by 258MB 00:04:31.804 EAL: Trying to obtain current memory policy. 
00:04:31.804 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.804 EAL: Restoring previous memory policy: 4 00:04:31.804 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.804 EAL: request: mp_malloc_sync 00:04:31.804 EAL: No shared files mode enabled, IPC is disabled 00:04:31.804 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.804 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.063 EAL: request: mp_malloc_sync 00:04:32.063 EAL: No shared files mode enabled, IPC is disabled 00:04:32.063 EAL: Heap on socket 0 was shrunk by 514MB 00:04:32.063 EAL: Trying to obtain current memory policy. 00:04:32.063 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.063 EAL: Restoring previous memory policy: 4 00:04:32.063 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.063 EAL: request: mp_malloc_sync 00:04:32.063 EAL: No shared files mode enabled, IPC is disabled 00:04:32.063 EAL: Heap on socket 0 was expanded by 1026MB 00:04:32.322 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.601 EAL: request: mp_malloc_sync 00:04:32.602 EAL: No shared files mode enabled, IPC is disabled 00:04:32.602 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:32.602 passed 00:04:32.602 00:04:32.602 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.602 suites 1 1 n/a 0 0 00:04:32.602 tests 2 2 2 0 0 00:04:32.602 asserts 497 497 497 0 n/a 00:04:32.602 00:04:32.602 Elapsed time = 0.957 seconds 00:04:32.602 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.602 EAL: request: mp_malloc_sync 00:04:32.602 EAL: No shared files mode enabled, IPC is disabled 00:04:32.602 EAL: Heap on socket 0 was shrunk by 2MB 00:04:32.602 EAL: No shared files mode enabled, IPC is disabled 00:04:32.602 EAL: No shared files mode enabled, IPC is disabled 00:04:32.602 EAL: No shared files mode enabled, IPC is disabled 00:04:32.602 00:04:32.602 real 0m1.062s 00:04:32.602 user 0m0.623s 00:04:32.602 sys 0m0.414s 00:04:32.602 19:10:43 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.602 19:10:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:32.602 ************************************ 00:04:32.602 END TEST env_vtophys 00:04:32.602 ************************************ 00:04:32.602 19:10:43 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.602 19:10:43 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:32.602 19:10:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.602 19:10:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.602 19:10:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.602 ************************************ 00:04:32.602 START TEST env_pci 00:04:32.602 ************************************ 00:04:32.602 19:10:43 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:32.602 00:04:32.602 00:04:32.602 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.602 http://cunit.sourceforge.net/ 00:04:32.602 00:04:32.602 00:04:32.602 Suite: pci 00:04:32.602 Test: pci_hook ...[2024-07-15 19:10:43.252742] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1423739 has claimed it 00:04:32.602 EAL: Cannot find device (10000:00:01.0) 00:04:32.602 EAL: Failed to attach device on primary process 00:04:32.602 passed 00:04:32.602 
00:04:32.602 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.602 suites 1 1 n/a 0 0 00:04:32.602 tests 1 1 1 0 0 00:04:32.602 asserts 25 25 25 0 n/a 00:04:32.602 00:04:32.602 Elapsed time = 0.025 seconds 00:04:32.602 00:04:32.602 real 0m0.039s 00:04:32.602 user 0m0.012s 00:04:32.602 sys 0m0.026s 00:04:32.602 19:10:43 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.602 19:10:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:32.602 ************************************ 00:04:32.602 END TEST env_pci 00:04:32.602 ************************************ 00:04:32.602 19:10:43 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.602 19:10:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:32.602 19:10:43 env -- env/env.sh@15 -- # uname 00:04:32.602 19:10:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:32.602 19:10:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:32.602 19:10:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.602 19:10:43 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:32.602 19:10:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.602 19:10:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.602 ************************************ 00:04:32.602 START TEST env_dpdk_post_init 00:04:32.602 ************************************ 00:04:32.602 19:10:43 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.602 EAL: Detected CPU lcores: 96 00:04:32.602 EAL: Detected NUMA nodes: 2 00:04:32.602 EAL: Detected shared linkage of DPDK 00:04:32.602 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.602 EAL: Selected IOVA mode 'VA' 00:04:32.602 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.602 EAL: VFIO support initialized 00:04:32.602 EAL: Using IOMMU type 1 (Type 1) 00:04:37.880 Starting DPDK initialization... 00:04:37.880 Starting SPDK post initialization... 00:04:37.880 SPDK NVMe probe 00:04:37.880 Attaching to 0000:5e:00.0 00:04:37.880 Attached to 0000:5e:00.0 00:04:37.880 Cleaning up... 
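Aside: env_dpdk_post_init, like the other env binaries exercised above, can also be run outside the autotest wrapper. A minimal sketch, assuming the NVMe device has already been bound to vfio-pci by scripts/setup.sh and reusing the same EAL arguments the env.sh wrapper builds on Linux (-c 0x1 and --base-virtaddr=0x200000000000); run as root.

    ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Rebind the NVMe and I/OAT devices to vfio-pci, as setup.sh did above.
    sudo "$ROOT/scripts/setup.sh"

    # Run the post-init test on a single core with a fixed base virtual address.
    sudo "$ROOT/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000
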
00:04:37.880 00:04:37.880 real 0m4.319s 00:04:37.880 user 0m3.273s 00:04:37.880 sys 0m0.115s 00:04:37.880 19:10:47 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.880 19:10:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.880 ************************************ 00:04:37.880 END TEST env_dpdk_post_init 00:04:37.880 ************************************ 00:04:37.880 19:10:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:37.880 19:10:47 env -- env/env.sh@26 -- # uname 00:04:37.880 19:10:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:37.880 19:10:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.880 19:10:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.880 19:10:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.880 19:10:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.880 ************************************ 00:04:37.880 START TEST env_mem_callbacks 00:04:37.880 ************************************ 00:04:37.880 19:10:47 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.880 EAL: Detected CPU lcores: 96 00:04:37.880 EAL: Detected NUMA nodes: 2 00:04:37.880 EAL: Detected shared linkage of DPDK 00:04:37.880 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.880 EAL: Selected IOVA mode 'VA' 00:04:37.880 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.880 EAL: VFIO support initialized 00:04:37.880 00:04:37.880 00:04:37.880 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.880 http://cunit.sourceforge.net/ 00:04:37.880 00:04:37.880 00:04:37.880 Suite: memory 00:04:37.880 Test: test ... 
00:04:37.880 register 0x200000200000 2097152 00:04:37.880 malloc 3145728 00:04:37.880 register 0x200000400000 4194304 00:04:37.880 buf 0x200000500000 len 3145728 PASSED 00:04:37.880 malloc 64 00:04:37.880 buf 0x2000004fff40 len 64 PASSED 00:04:37.880 malloc 4194304 00:04:37.880 register 0x200000800000 6291456 00:04:37.880 buf 0x200000a00000 len 4194304 PASSED 00:04:37.880 free 0x200000500000 3145728 00:04:37.880 free 0x2000004fff40 64 00:04:37.880 unregister 0x200000400000 4194304 PASSED 00:04:37.880 free 0x200000a00000 4194304 00:04:37.880 unregister 0x200000800000 6291456 PASSED 00:04:37.880 malloc 8388608 00:04:37.880 register 0x200000400000 10485760 00:04:37.880 buf 0x200000600000 len 8388608 PASSED 00:04:37.880 free 0x200000600000 8388608 00:04:37.880 unregister 0x200000400000 10485760 PASSED 00:04:37.880 passed 00:04:37.880 00:04:37.880 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.880 suites 1 1 n/a 0 0 00:04:37.880 tests 1 1 1 0 0 00:04:37.880 asserts 15 15 15 0 n/a 00:04:37.880 00:04:37.880 Elapsed time = 0.005 seconds 00:04:37.880 00:04:37.880 real 0m0.051s 00:04:37.880 user 0m0.015s 00:04:37.880 sys 0m0.036s 00:04:37.880 19:10:47 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.880 19:10:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:37.880 ************************************ 00:04:37.880 END TEST env_mem_callbacks 00:04:37.880 ************************************ 00:04:37.880 19:10:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:37.880 00:04:37.880 real 0m5.961s 00:04:37.880 user 0m4.174s 00:04:37.880 sys 0m0.856s 00:04:37.880 19:10:47 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.880 19:10:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.880 ************************************ 00:04:37.880 END TEST env 00:04:37.880 ************************************ 00:04:37.880 19:10:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.880 19:10:47 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:37.880 19:10:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.880 19:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.880 19:10:47 -- common/autotest_common.sh@10 -- # set +x 00:04:37.880 ************************************ 00:04:37.880 START TEST rpc 00:04:37.880 ************************************ 00:04:37.880 19:10:47 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:37.880 * Looking for test storage... 00:04:37.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.880 19:10:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1424775 00:04:37.880 19:10:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.880 19:10:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1424775 00:04:37.880 19:10:47 rpc -- common/autotest_common.sh@829 -- # '[' -z 1424775 ']' 00:04:37.880 19:10:47 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.880 19:10:47 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.880 19:10:47 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:37.880 19:10:47 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.880 19:10:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.880 19:10:47 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:37.880 [2024-07-15 19:10:48.001432] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:04:37.880 [2024-07-15 19:10:48.001475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1424775 ] 00:04:37.880 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.880 [2024-07-15 19:10:48.027500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:37.880 [2024-07-15 19:10:48.055117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.880 [2024-07-15 19:10:48.095773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:37.880 [2024-07-15 19:10:48.095811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1424775' to capture a snapshot of events at runtime. 00:04:37.880 [2024-07-15 19:10:48.095818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:37.880 [2024-07-15 19:10:48.095823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:37.880 [2024-07-15 19:10:48.095829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1424775 for offline analysis/debug. 00:04:37.880 [2024-07-15 19:10:48.095847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.880 19:10:48 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.880 19:10:48 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:37.880 19:10:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.880 19:10:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.880 19:10:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.880 19:10:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.880 19:10:48 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.880 19:10:48 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.880 19:10:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.880 ************************************ 00:04:37.880 START TEST rpc_integrity 00:04:37.880 ************************************ 00:04:37.880 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:37.880 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.880 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.880 19:10:48 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.880 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.880 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.880 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.880 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.880 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.880 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.880 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.880 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.880 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.880 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.880 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.880 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.880 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.880 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.880 { 00:04:37.880 "name": "Malloc0", 00:04:37.880 "aliases": [ 00:04:37.880 "1bb984ea-b057-4db2-b246-32ebd01dcb6b" 00:04:37.880 ], 00:04:37.880 "product_name": "Malloc disk", 00:04:37.880 "block_size": 512, 00:04:37.880 "num_blocks": 16384, 00:04:37.880 "uuid": "1bb984ea-b057-4db2-b246-32ebd01dcb6b", 00:04:37.880 "assigned_rate_limits": { 00:04:37.880 "rw_ios_per_sec": 0, 00:04:37.880 "rw_mbytes_per_sec": 0, 00:04:37.880 "r_mbytes_per_sec": 0, 00:04:37.880 "w_mbytes_per_sec": 0 00:04:37.880 }, 00:04:37.880 "claimed": false, 00:04:37.880 "zoned": false, 00:04:37.880 "supported_io_types": { 00:04:37.880 "read": true, 00:04:37.880 "write": true, 00:04:37.880 "unmap": true, 00:04:37.880 "flush": true, 00:04:37.880 "reset": true, 00:04:37.880 "nvme_admin": false, 00:04:37.880 "nvme_io": false, 00:04:37.880 "nvme_io_md": false, 00:04:37.880 "write_zeroes": true, 00:04:37.880 "zcopy": true, 00:04:37.880 "get_zone_info": false, 00:04:37.880 "zone_management": false, 00:04:37.880 "zone_append": false, 00:04:37.880 "compare": false, 00:04:37.881 "compare_and_write": false, 00:04:37.881 "abort": true, 00:04:37.881 "seek_hole": false, 00:04:37.881 "seek_data": false, 00:04:37.881 "copy": true, 00:04:37.881 "nvme_iov_md": false 00:04:37.881 }, 00:04:37.881 "memory_domains": [ 00:04:37.881 { 00:04:37.881 "dma_device_id": "system", 00:04:37.881 "dma_device_type": 1 00:04:37.881 }, 00:04:37.881 { 00:04:37.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.881 "dma_device_type": 2 00:04:37.881 } 00:04:37.881 ], 00:04:37.881 "driver_specific": {} 00:04:37.881 } 00:04:37.881 ]' 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.881 [2024-07-15 19:10:48.429076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.881 [2024-07-15 19:10:48.429105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.881 [2024-07-15 19:10:48.429121] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1329b80 00:04:37.881 [2024-07-15 19:10:48.429127] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.881 [2024-07-15 19:10:48.430113] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.881 [2024-07-15 19:10:48.430133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.881 Passthru0 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.881 { 00:04:37.881 "name": "Malloc0", 00:04:37.881 "aliases": [ 00:04:37.881 "1bb984ea-b057-4db2-b246-32ebd01dcb6b" 00:04:37.881 ], 00:04:37.881 "product_name": "Malloc disk", 00:04:37.881 "block_size": 512, 00:04:37.881 "num_blocks": 16384, 00:04:37.881 "uuid": "1bb984ea-b057-4db2-b246-32ebd01dcb6b", 00:04:37.881 "assigned_rate_limits": { 00:04:37.881 "rw_ios_per_sec": 0, 00:04:37.881 "rw_mbytes_per_sec": 0, 00:04:37.881 "r_mbytes_per_sec": 0, 00:04:37.881 "w_mbytes_per_sec": 0 00:04:37.881 }, 00:04:37.881 "claimed": true, 00:04:37.881 "claim_type": "exclusive_write", 00:04:37.881 "zoned": false, 00:04:37.881 "supported_io_types": { 00:04:37.881 "read": true, 00:04:37.881 "write": true, 00:04:37.881 "unmap": true, 00:04:37.881 "flush": true, 00:04:37.881 "reset": true, 00:04:37.881 "nvme_admin": false, 00:04:37.881 "nvme_io": false, 00:04:37.881 "nvme_io_md": false, 00:04:37.881 "write_zeroes": true, 00:04:37.881 "zcopy": true, 00:04:37.881 "get_zone_info": false, 00:04:37.881 "zone_management": false, 00:04:37.881 "zone_append": false, 00:04:37.881 "compare": false, 00:04:37.881 "compare_and_write": false, 00:04:37.881 "abort": true, 00:04:37.881 "seek_hole": false, 00:04:37.881 "seek_data": false, 00:04:37.881 "copy": true, 00:04:37.881 "nvme_iov_md": false 00:04:37.881 }, 00:04:37.881 "memory_domains": [ 00:04:37.881 { 00:04:37.881 "dma_device_id": "system", 00:04:37.881 "dma_device_type": 1 00:04:37.881 }, 00:04:37.881 { 00:04:37.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.881 "dma_device_type": 2 00:04:37.881 } 00:04:37.881 ], 00:04:37.881 "driver_specific": {} 00:04:37.881 }, 00:04:37.881 { 00:04:37.881 "name": "Passthru0", 00:04:37.881 "aliases": [ 00:04:37.881 "e3b79780-98a5-5dcf-badb-f5856db473db" 00:04:37.881 ], 00:04:37.881 "product_name": "passthru", 00:04:37.881 "block_size": 512, 00:04:37.881 "num_blocks": 16384, 00:04:37.881 "uuid": "e3b79780-98a5-5dcf-badb-f5856db473db", 00:04:37.881 "assigned_rate_limits": { 00:04:37.881 "rw_ios_per_sec": 0, 00:04:37.881 "rw_mbytes_per_sec": 0, 00:04:37.881 "r_mbytes_per_sec": 0, 00:04:37.881 "w_mbytes_per_sec": 0 00:04:37.881 }, 00:04:37.881 "claimed": false, 00:04:37.881 "zoned": false, 00:04:37.881 "supported_io_types": { 00:04:37.881 "read": true, 00:04:37.881 "write": true, 00:04:37.881 "unmap": true, 00:04:37.881 "flush": true, 00:04:37.881 "reset": true, 00:04:37.881 "nvme_admin": false, 00:04:37.881 "nvme_io": false, 00:04:37.881 "nvme_io_md": false, 00:04:37.881 "write_zeroes": true, 00:04:37.881 "zcopy": true, 00:04:37.881 "get_zone_info": 
false, 00:04:37.881 "zone_management": false, 00:04:37.881 "zone_append": false, 00:04:37.881 "compare": false, 00:04:37.881 "compare_and_write": false, 00:04:37.881 "abort": true, 00:04:37.881 "seek_hole": false, 00:04:37.881 "seek_data": false, 00:04:37.881 "copy": true, 00:04:37.881 "nvme_iov_md": false 00:04:37.881 }, 00:04:37.881 "memory_domains": [ 00:04:37.881 { 00:04:37.881 "dma_device_id": "system", 00:04:37.881 "dma_device_type": 1 00:04:37.881 }, 00:04:37.881 { 00:04:37.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.881 "dma_device_type": 2 00:04:37.881 } 00:04:37.881 ], 00:04:37.881 "driver_specific": { 00:04:37.881 "passthru": { 00:04:37.881 "name": "Passthru0", 00:04:37.881 "base_bdev_name": "Malloc0" 00:04:37.881 } 00:04:37.881 } 00:04:37.881 } 00:04:37.881 ]' 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.881 19:10:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.881 00:04:37.881 real 0m0.259s 00:04:37.881 user 0m0.164s 00:04:37.881 sys 0m0.030s 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.881 19:10:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.881 ************************************ 00:04:37.881 END TEST rpc_integrity 00:04:37.881 ************************************ 00:04:37.881 19:10:48 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:37.881 19:10:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.881 19:10:48 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.881 19:10:48 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.881 19:10:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.881 ************************************ 00:04:37.881 START TEST rpc_plugins 00:04:37.881 ************************************ 00:04:37.881 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:37.881 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.881 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.881 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 
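Aside: the rpc_integrity pass that just completed is a create/inspect/delete round-trip over the RPC socket. A minimal sketch of the same sequence using scripts/rpc.py directly (the rpc_cmd helper above wraps it), assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock:

    ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$ROOT/scripts/rpc.py"

    # Start from an empty bdev list, create an 8 MiB malloc bdev with 512-byte blocks,
    # then layer a passthru bdev on top of it (each call prints the new bdev name).
    [ "$("$RPC" bdev_get_bdevs | jq length)" -eq 0 ]
    "$RPC" bdev_malloc_create 8 512
    "$RPC" bdev_passthru_create -b Malloc0 -p Passthru0

    # Both bdevs should now be visible.
    [ "$("$RPC" bdev_get_bdevs | jq length)" -eq 2 ]

    # Tear down in reverse order and confirm the list is empty again.
    "$RPC" bdev_passthru_delete Passthru0
    "$RPC" bdev_malloc_delete Malloc0
    [ "$("$RPC" bdev_get_bdevs | jq length)" -eq 0 ]
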
00:04:37.881 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.881 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.881 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.881 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.881 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.881 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.881 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.881 { 00:04:37.881 "name": "Malloc1", 00:04:37.881 "aliases": [ 00:04:37.881 "7bd3bad8-fa43-4931-a6a5-db621e444d9e" 00:04:37.881 ], 00:04:37.881 "product_name": "Malloc disk", 00:04:37.881 "block_size": 4096, 00:04:37.881 "num_blocks": 256, 00:04:37.881 "uuid": "7bd3bad8-fa43-4931-a6a5-db621e444d9e", 00:04:37.881 "assigned_rate_limits": { 00:04:37.881 "rw_ios_per_sec": 0, 00:04:37.881 "rw_mbytes_per_sec": 0, 00:04:37.881 "r_mbytes_per_sec": 0, 00:04:37.881 "w_mbytes_per_sec": 0 00:04:37.881 }, 00:04:37.881 "claimed": false, 00:04:37.881 "zoned": false, 00:04:37.881 "supported_io_types": { 00:04:37.881 "read": true, 00:04:37.881 "write": true, 00:04:37.881 "unmap": true, 00:04:37.881 "flush": true, 00:04:37.881 "reset": true, 00:04:37.881 "nvme_admin": false, 00:04:37.881 "nvme_io": false, 00:04:37.881 "nvme_io_md": false, 00:04:37.881 "write_zeroes": true, 00:04:37.881 "zcopy": true, 00:04:37.881 "get_zone_info": false, 00:04:37.881 "zone_management": false, 00:04:37.881 "zone_append": false, 00:04:37.881 "compare": false, 00:04:37.881 "compare_and_write": false, 00:04:37.881 "abort": true, 00:04:37.881 "seek_hole": false, 00:04:37.881 "seek_data": false, 00:04:37.881 "copy": true, 00:04:37.881 "nvme_iov_md": false 00:04:37.881 }, 00:04:37.881 "memory_domains": [ 00:04:37.881 { 00:04:37.881 "dma_device_id": "system", 00:04:37.881 "dma_device_type": 1 00:04:37.881 }, 00:04:37.881 { 00:04:37.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.881 "dma_device_type": 2 00:04:37.881 } 00:04:37.881 ], 00:04:37.881 "driver_specific": {} 00:04:37.881 } 00:04:37.881 ]' 00:04:37.882 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:37.882 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.882 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.882 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.882 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.882 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.882 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.882 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.882 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.882 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.882 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.882 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:38.141 19:10:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:38.141 00:04:38.141 real 0m0.130s 00:04:38.141 user 0m0.084s 00:04:38.141 sys 0m0.013s 00:04:38.141 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.141 19:10:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.141 
************************************ 00:04:38.141 END TEST rpc_plugins 00:04:38.141 ************************************ 00:04:38.141 19:10:48 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.141 19:10:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:38.141 19:10:48 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.141 19:10:48 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.141 19:10:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.141 ************************************ 00:04:38.141 START TEST rpc_trace_cmd_test 00:04:38.141 ************************************ 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:38.141 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1424775", 00:04:38.141 "tpoint_group_mask": "0x8", 00:04:38.141 "iscsi_conn": { 00:04:38.141 "mask": "0x2", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "scsi": { 00:04:38.141 "mask": "0x4", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "bdev": { 00:04:38.141 "mask": "0x8", 00:04:38.141 "tpoint_mask": "0xffffffffffffffff" 00:04:38.141 }, 00:04:38.141 "nvmf_rdma": { 00:04:38.141 "mask": "0x10", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "nvmf_tcp": { 00:04:38.141 "mask": "0x20", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "ftl": { 00:04:38.141 "mask": "0x40", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "blobfs": { 00:04:38.141 "mask": "0x80", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "dsa": { 00:04:38.141 "mask": "0x200", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "thread": { 00:04:38.141 "mask": "0x400", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "nvme_pcie": { 00:04:38.141 "mask": "0x800", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "iaa": { 00:04:38.141 "mask": "0x1000", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "nvme_tcp": { 00:04:38.141 "mask": "0x2000", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "bdev_nvme": { 00:04:38.141 "mask": "0x4000", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 }, 00:04:38.141 "sock": { 00:04:38.141 "mask": "0x8000", 00:04:38.141 "tpoint_mask": "0x0" 00:04:38.141 } 00:04:38.141 }' 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:38.141 19:10:48 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:38.141 19:10:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:38.400 19:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:38.400 00:04:38.400 real 0m0.212s 00:04:38.400 user 0m0.180s 00:04:38.400 sys 0m0.022s 00:04:38.401 19:10:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.401 19:10:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.401 ************************************ 00:04:38.401 END TEST rpc_trace_cmd_test 00:04:38.401 ************************************ 00:04:38.401 19:10:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.401 19:10:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:38.401 19:10:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:38.401 19:10:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:38.401 19:10:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.401 19:10:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.401 19:10:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.401 ************************************ 00:04:38.401 START TEST rpc_daemon_integrity 00:04:38.401 ************************************ 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.401 { 00:04:38.401 "name": "Malloc2", 00:04:38.401 "aliases": [ 00:04:38.401 "835e7960-c142-48a9-bf86-78f62537b918" 00:04:38.401 ], 00:04:38.401 "product_name": "Malloc disk", 00:04:38.401 "block_size": 512, 00:04:38.401 "num_blocks": 16384, 00:04:38.401 "uuid": "835e7960-c142-48a9-bf86-78f62537b918", 00:04:38.401 "assigned_rate_limits": { 00:04:38.401 "rw_ios_per_sec": 0, 00:04:38.401 "rw_mbytes_per_sec": 0, 00:04:38.401 "r_mbytes_per_sec": 0, 00:04:38.401 "w_mbytes_per_sec": 0 00:04:38.401 }, 00:04:38.401 "claimed": false, 
00:04:38.401 "zoned": false, 00:04:38.401 "supported_io_types": { 00:04:38.401 "read": true, 00:04:38.401 "write": true, 00:04:38.401 "unmap": true, 00:04:38.401 "flush": true, 00:04:38.401 "reset": true, 00:04:38.401 "nvme_admin": false, 00:04:38.401 "nvme_io": false, 00:04:38.401 "nvme_io_md": false, 00:04:38.401 "write_zeroes": true, 00:04:38.401 "zcopy": true, 00:04:38.401 "get_zone_info": false, 00:04:38.401 "zone_management": false, 00:04:38.401 "zone_append": false, 00:04:38.401 "compare": false, 00:04:38.401 "compare_and_write": false, 00:04:38.401 "abort": true, 00:04:38.401 "seek_hole": false, 00:04:38.401 "seek_data": false, 00:04:38.401 "copy": true, 00:04:38.401 "nvme_iov_md": false 00:04:38.401 }, 00:04:38.401 "memory_domains": [ 00:04:38.401 { 00:04:38.401 "dma_device_id": "system", 00:04:38.401 "dma_device_type": 1 00:04:38.401 }, 00:04:38.401 { 00:04:38.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.401 "dma_device_type": 2 00:04:38.401 } 00:04:38.401 ], 00:04:38.401 "driver_specific": {} 00:04:38.401 } 00:04:38.401 ]' 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.401 [2024-07-15 19:10:49.211186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.401 [2024-07-15 19:10:49.211212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.401 [2024-07-15 19:10:49.211228] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c2a30 00:04:38.401 [2024-07-15 19:10:49.211234] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.401 [2024-07-15 19:10:49.212165] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.401 [2024-07-15 19:10:49.212184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.401 Passthru0 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.401 { 00:04:38.401 "name": "Malloc2", 00:04:38.401 "aliases": [ 00:04:38.401 "835e7960-c142-48a9-bf86-78f62537b918" 00:04:38.401 ], 00:04:38.401 "product_name": "Malloc disk", 00:04:38.401 "block_size": 512, 00:04:38.401 "num_blocks": 16384, 00:04:38.401 "uuid": "835e7960-c142-48a9-bf86-78f62537b918", 00:04:38.401 "assigned_rate_limits": { 00:04:38.401 "rw_ios_per_sec": 0, 00:04:38.401 "rw_mbytes_per_sec": 0, 00:04:38.401 "r_mbytes_per_sec": 0, 00:04:38.401 "w_mbytes_per_sec": 0 00:04:38.401 }, 00:04:38.401 "claimed": true, 00:04:38.401 "claim_type": "exclusive_write", 00:04:38.401 "zoned": false, 00:04:38.401 "supported_io_types": { 00:04:38.401 "read": true, 00:04:38.401 "write": true, 
00:04:38.401 "unmap": true, 00:04:38.401 "flush": true, 00:04:38.401 "reset": true, 00:04:38.401 "nvme_admin": false, 00:04:38.401 "nvme_io": false, 00:04:38.401 "nvme_io_md": false, 00:04:38.401 "write_zeroes": true, 00:04:38.401 "zcopy": true, 00:04:38.401 "get_zone_info": false, 00:04:38.401 "zone_management": false, 00:04:38.401 "zone_append": false, 00:04:38.401 "compare": false, 00:04:38.401 "compare_and_write": false, 00:04:38.401 "abort": true, 00:04:38.401 "seek_hole": false, 00:04:38.401 "seek_data": false, 00:04:38.401 "copy": true, 00:04:38.401 "nvme_iov_md": false 00:04:38.401 }, 00:04:38.401 "memory_domains": [ 00:04:38.401 { 00:04:38.401 "dma_device_id": "system", 00:04:38.401 "dma_device_type": 1 00:04:38.401 }, 00:04:38.401 { 00:04:38.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.401 "dma_device_type": 2 00:04:38.401 } 00:04:38.401 ], 00:04:38.401 "driver_specific": {} 00:04:38.401 }, 00:04:38.401 { 00:04:38.401 "name": "Passthru0", 00:04:38.401 "aliases": [ 00:04:38.401 "b8307cf9-24a7-5800-88ca-dcb2f5a1d2a9" 00:04:38.401 ], 00:04:38.401 "product_name": "passthru", 00:04:38.401 "block_size": 512, 00:04:38.401 "num_blocks": 16384, 00:04:38.401 "uuid": "b8307cf9-24a7-5800-88ca-dcb2f5a1d2a9", 00:04:38.401 "assigned_rate_limits": { 00:04:38.401 "rw_ios_per_sec": 0, 00:04:38.401 "rw_mbytes_per_sec": 0, 00:04:38.401 "r_mbytes_per_sec": 0, 00:04:38.401 "w_mbytes_per_sec": 0 00:04:38.401 }, 00:04:38.401 "claimed": false, 00:04:38.401 "zoned": false, 00:04:38.401 "supported_io_types": { 00:04:38.401 "read": true, 00:04:38.401 "write": true, 00:04:38.401 "unmap": true, 00:04:38.401 "flush": true, 00:04:38.401 "reset": true, 00:04:38.401 "nvme_admin": false, 00:04:38.401 "nvme_io": false, 00:04:38.401 "nvme_io_md": false, 00:04:38.401 "write_zeroes": true, 00:04:38.401 "zcopy": true, 00:04:38.401 "get_zone_info": false, 00:04:38.401 "zone_management": false, 00:04:38.401 "zone_append": false, 00:04:38.401 "compare": false, 00:04:38.401 "compare_and_write": false, 00:04:38.401 "abort": true, 00:04:38.401 "seek_hole": false, 00:04:38.401 "seek_data": false, 00:04:38.401 "copy": true, 00:04:38.401 "nvme_iov_md": false 00:04:38.401 }, 00:04:38.401 "memory_domains": [ 00:04:38.401 { 00:04:38.401 "dma_device_id": "system", 00:04:38.401 "dma_device_type": 1 00:04:38.401 }, 00:04:38.401 { 00:04:38.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.401 "dma_device_type": 2 00:04:38.401 } 00:04:38.401 ], 00:04:38.401 "driver_specific": { 00:04:38.401 "passthru": { 00:04:38.401 "name": "Passthru0", 00:04:38.401 "base_bdev_name": "Malloc2" 00:04:38.401 } 00:04:38.401 } 00:04:38.401 } 00:04:38.401 ]' 00:04:38.401 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.661 19:10:49 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.661 00:04:38.661 real 0m0.255s 00:04:38.661 user 0m0.162s 00:04:38.661 sys 0m0.030s 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.661 19:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.661 ************************************ 00:04:38.661 END TEST rpc_daemon_integrity 00:04:38.661 ************************************ 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.661 19:10:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.661 19:10:49 rpc -- rpc/rpc.sh@84 -- # killprocess 1424775 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@948 -- # '[' -z 1424775 ']' 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@952 -- # kill -0 1424775 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@953 -- # uname 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1424775 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1424775' 00:04:38.661 killing process with pid 1424775 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@967 -- # kill 1424775 00:04:38.661 19:10:49 rpc -- common/autotest_common.sh@972 -- # wait 1424775 00:04:38.920 00:04:38.920 real 0m1.843s 00:04:38.920 user 0m2.401s 00:04:38.920 sys 0m0.583s 00:04:38.920 19:10:49 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.920 19:10:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.920 ************************************ 00:04:38.920 END TEST rpc 00:04:38.920 ************************************ 00:04:38.920 19:10:49 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.920 19:10:49 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:38.920 19:10:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.920 19:10:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.920 19:10:49 -- common/autotest_common.sh@10 -- # set +x 00:04:38.920 ************************************ 00:04:38.920 START TEST skip_rpc 00:04:38.920 ************************************ 00:04:38.920 19:10:49 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:39.179 * Looking for test storage... 
00:04:39.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.179 19:10:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:39.179 19:10:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.179 19:10:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:39.179 19:10:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.179 19:10:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.179 19:10:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.179 ************************************ 00:04:39.179 START TEST skip_rpc 00:04:39.179 ************************************ 00:04:39.179 19:10:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:39.179 19:10:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1425187 00:04:39.179 19:10:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.179 19:10:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:39.179 19:10:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:39.179 [2024-07-15 19:10:49.930979] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:04:39.179 [2024-07-15 19:10:49.931020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425187 ] 00:04:39.179 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.179 [2024-07-15 19:10:49.956646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
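Aside: the skip_rpc case starting here checks the opposite condition: a target launched with --no-rpc-server must refuse RPC traffic, and the NOT wrapper in the following lines only passes when the call fails. A rough equivalent, assuming the same spdk_tgt binary and that rpc_cmd is interchangeable with scripts/rpc.py on the default socket:

# start the target without an RPC server (same flags as in the log above)
build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5
# any RPC must fail; a zero exit status here would mean the test failed
if scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server is answering" >&2
    exit 1
fi
kill "$tgt_pid"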
00:04:39.179 [2024-07-15 19:10:49.985644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.179 [2024-07-15 19:10:50.026739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1425187 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1425187 ']' 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1425187 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1425187 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1425187' 00:04:44.454 killing process with pid 1425187 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1425187 00:04:44.454 19:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1425187 00:04:44.454 00:04:44.454 real 0m5.353s 00:04:44.454 user 0m5.143s 00:04:44.454 sys 0m0.242s 00:04:44.454 19:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.454 19:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.454 ************************************ 00:04:44.454 END TEST skip_rpc 00:04:44.454 ************************************ 00:04:44.454 19:10:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:44.454 19:10:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:44.454 19:10:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.454 
19:10:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.454 19:10:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.454 ************************************ 00:04:44.454 START TEST skip_rpc_with_json 00:04:44.454 ************************************ 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1426128 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1426128 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1426128 ']' 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.454 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.713 [2024-07-15 19:10:55.348778] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:04:44.713 [2024-07-15 19:10:55.348820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426128 ] 00:04:44.713 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.713 [2024-07-15 19:10:55.374468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
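Aside: skip_rpc_with_json, whose startup is shown here, exercises the save/restore path: the lines that follow create a TCP transport over RPC, dump the running configuration with save_config, restart the target from that JSON file, and grep the new target's log for the transport-init message. Reduced to its visible steps (a sketch; paths abbreviated relative to the SPDK tree):

# on the running target: create the transport and capture the configuration
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py save_config > test/rpc/config.json
# restart the target from the saved JSON and verify the transport comes back
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' test/rpc/log.txt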
00:04:44.713 [2024-07-15 19:10:55.402946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.713 [2024-07-15 19:10:55.442296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.972 [2024-07-15 19:10:55.628439] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:44.972 request: 00:04:44.972 { 00:04:44.972 "trtype": "tcp", 00:04:44.972 "method": "nvmf_get_transports", 00:04:44.972 "req_id": 1 00:04:44.972 } 00:04:44.972 Got JSON-RPC error response 00:04:44.972 response: 00:04:44.972 { 00:04:44.972 "code": -19, 00:04:44.972 "message": "No such device" 00:04:44.972 } 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.972 [2024-07-15 19:10:55.640547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.972 19:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.972 { 00:04:44.972 "subsystems": [ 00:04:44.972 { 00:04:44.972 "subsystem": "vfio_user_target", 00:04:44.972 "config": null 00:04:44.972 }, 00:04:44.972 { 00:04:44.972 "subsystem": "keyring", 00:04:44.972 "config": [] 00:04:44.972 }, 00:04:44.972 { 00:04:44.972 "subsystem": "iobuf", 00:04:44.972 "config": [ 00:04:44.972 { 00:04:44.972 "method": "iobuf_set_options", 00:04:44.972 "params": { 00:04:44.972 "small_pool_count": 8192, 00:04:44.972 "large_pool_count": 1024, 00:04:44.972 "small_bufsize": 8192, 00:04:44.972 "large_bufsize": 135168 00:04:44.972 } 00:04:44.972 } 00:04:44.972 ] 00:04:44.972 }, 00:04:44.972 { 00:04:44.972 "subsystem": "sock", 00:04:44.972 "config": [ 00:04:44.972 { 00:04:44.972 "method": "sock_set_default_impl", 00:04:44.972 "params": { 00:04:44.972 "impl_name": "posix" 00:04:44.972 } 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "method": "sock_impl_set_options", 00:04:44.973 "params": { 00:04:44.973 "impl_name": "ssl", 00:04:44.973 "recv_buf_size": 4096, 00:04:44.973 "send_buf_size": 4096, 00:04:44.973 "enable_recv_pipe": true, 00:04:44.973 "enable_quickack": false, 00:04:44.973 "enable_placement_id": 0, 00:04:44.973 "enable_zerocopy_send_server": true, 00:04:44.973 
"enable_zerocopy_send_client": false, 00:04:44.973 "zerocopy_threshold": 0, 00:04:44.973 "tls_version": 0, 00:04:44.973 "enable_ktls": false 00:04:44.973 } 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "method": "sock_impl_set_options", 00:04:44.973 "params": { 00:04:44.973 "impl_name": "posix", 00:04:44.973 "recv_buf_size": 2097152, 00:04:44.973 "send_buf_size": 2097152, 00:04:44.973 "enable_recv_pipe": true, 00:04:44.973 "enable_quickack": false, 00:04:44.973 "enable_placement_id": 0, 00:04:44.973 "enable_zerocopy_send_server": true, 00:04:44.973 "enable_zerocopy_send_client": false, 00:04:44.973 "zerocopy_threshold": 0, 00:04:44.973 "tls_version": 0, 00:04:44.973 "enable_ktls": false 00:04:44.973 } 00:04:44.973 } 00:04:44.973 ] 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "vmd", 00:04:44.973 "config": [] 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "accel", 00:04:44.973 "config": [ 00:04:44.973 { 00:04:44.973 "method": "accel_set_options", 00:04:44.973 "params": { 00:04:44.973 "small_cache_size": 128, 00:04:44.973 "large_cache_size": 16, 00:04:44.973 "task_count": 2048, 00:04:44.973 "sequence_count": 2048, 00:04:44.973 "buf_count": 2048 00:04:44.973 } 00:04:44.973 } 00:04:44.973 ] 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "bdev", 00:04:44.973 "config": [ 00:04:44.973 { 00:04:44.973 "method": "bdev_set_options", 00:04:44.973 "params": { 00:04:44.973 "bdev_io_pool_size": 65535, 00:04:44.973 "bdev_io_cache_size": 256, 00:04:44.973 "bdev_auto_examine": true, 00:04:44.973 "iobuf_small_cache_size": 128, 00:04:44.973 "iobuf_large_cache_size": 16 00:04:44.973 } 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "method": "bdev_raid_set_options", 00:04:44.973 "params": { 00:04:44.973 "process_window_size_kb": 1024 00:04:44.973 } 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "method": "bdev_iscsi_set_options", 00:04:44.973 "params": { 00:04:44.973 "timeout_sec": 30 00:04:44.973 } 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "method": "bdev_nvme_set_options", 00:04:44.973 "params": { 00:04:44.973 "action_on_timeout": "none", 00:04:44.973 "timeout_us": 0, 00:04:44.973 "timeout_admin_us": 0, 00:04:44.973 "keep_alive_timeout_ms": 10000, 00:04:44.973 "arbitration_burst": 0, 00:04:44.973 "low_priority_weight": 0, 00:04:44.973 "medium_priority_weight": 0, 00:04:44.973 "high_priority_weight": 0, 00:04:44.973 "nvme_adminq_poll_period_us": 10000, 00:04:44.973 "nvme_ioq_poll_period_us": 0, 00:04:44.973 "io_queue_requests": 0, 00:04:44.973 "delay_cmd_submit": true, 00:04:44.973 "transport_retry_count": 4, 00:04:44.973 "bdev_retry_count": 3, 00:04:44.973 "transport_ack_timeout": 0, 00:04:44.973 "ctrlr_loss_timeout_sec": 0, 00:04:44.973 "reconnect_delay_sec": 0, 00:04:44.973 "fast_io_fail_timeout_sec": 0, 00:04:44.973 "disable_auto_failback": false, 00:04:44.973 "generate_uuids": false, 00:04:44.973 "transport_tos": 0, 00:04:44.973 "nvme_error_stat": false, 00:04:44.973 "rdma_srq_size": 0, 00:04:44.973 "io_path_stat": false, 00:04:44.973 "allow_accel_sequence": false, 00:04:44.973 "rdma_max_cq_size": 0, 00:04:44.973 "rdma_cm_event_timeout_ms": 0, 00:04:44.973 "dhchap_digests": [ 00:04:44.973 "sha256", 00:04:44.973 "sha384", 00:04:44.973 "sha512" 00:04:44.973 ], 00:04:44.973 "dhchap_dhgroups": [ 00:04:44.973 "null", 00:04:44.973 "ffdhe2048", 00:04:44.973 "ffdhe3072", 00:04:44.973 "ffdhe4096", 00:04:44.973 "ffdhe6144", 00:04:44.973 "ffdhe8192" 00:04:44.973 ] 00:04:44.973 } 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "method": "bdev_nvme_set_hotplug", 00:04:44.973 "params": { 
00:04:44.973 "period_us": 100000, 00:04:44.973 "enable": false 00:04:44.973 } 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "method": "bdev_wait_for_examine" 00:04:44.973 } 00:04:44.973 ] 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "scsi", 00:04:44.973 "config": null 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "scheduler", 00:04:44.973 "config": [ 00:04:44.973 { 00:04:44.973 "method": "framework_set_scheduler", 00:04:44.973 "params": { 00:04:44.973 "name": "static" 00:04:44.973 } 00:04:44.973 } 00:04:44.973 ] 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "vhost_scsi", 00:04:44.973 "config": [] 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "vhost_blk", 00:04:44.973 "config": [] 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "ublk", 00:04:44.973 "config": [] 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "nbd", 00:04:44.973 "config": [] 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "nvmf", 00:04:44.973 "config": [ 00:04:44.973 { 00:04:44.973 "method": "nvmf_set_config", 00:04:44.973 "params": { 00:04:44.973 "discovery_filter": "match_any", 00:04:44.973 "admin_cmd_passthru": { 00:04:44.973 "identify_ctrlr": false 00:04:44.973 } 00:04:44.973 } 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "method": "nvmf_set_max_subsystems", 00:04:44.973 "params": { 00:04:44.973 "max_subsystems": 1024 00:04:44.973 } 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "method": "nvmf_set_crdt", 00:04:44.973 "params": { 00:04:44.973 "crdt1": 0, 00:04:44.973 "crdt2": 0, 00:04:44.973 "crdt3": 0 00:04:44.973 } 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "method": "nvmf_create_transport", 00:04:44.973 "params": { 00:04:44.973 "trtype": "TCP", 00:04:44.973 "max_queue_depth": 128, 00:04:44.973 "max_io_qpairs_per_ctrlr": 127, 00:04:44.973 "in_capsule_data_size": 4096, 00:04:44.973 "max_io_size": 131072, 00:04:44.973 "io_unit_size": 131072, 00:04:44.973 "max_aq_depth": 128, 00:04:44.973 "num_shared_buffers": 511, 00:04:44.973 "buf_cache_size": 4294967295, 00:04:44.973 "dif_insert_or_strip": false, 00:04:44.973 "zcopy": false, 00:04:44.973 "c2h_success": true, 00:04:44.973 "sock_priority": 0, 00:04:44.973 "abort_timeout_sec": 1, 00:04:44.973 "ack_timeout": 0, 00:04:44.973 "data_wr_pool_size": 0 00:04:44.973 } 00:04:44.973 } 00:04:44.973 ] 00:04:44.973 }, 00:04:44.973 { 00:04:44.973 "subsystem": "iscsi", 00:04:44.973 "config": [ 00:04:44.973 { 00:04:44.973 "method": "iscsi_set_options", 00:04:44.973 "params": { 00:04:44.973 "node_base": "iqn.2016-06.io.spdk", 00:04:44.973 "max_sessions": 128, 00:04:44.973 "max_connections_per_session": 2, 00:04:44.973 "max_queue_depth": 64, 00:04:44.973 "default_time2wait": 2, 00:04:44.973 "default_time2retain": 20, 00:04:44.973 "first_burst_length": 8192, 00:04:44.973 "immediate_data": true, 00:04:44.973 "allow_duplicated_isid": false, 00:04:44.973 "error_recovery_level": 0, 00:04:44.973 "nop_timeout": 60, 00:04:44.973 "nop_in_interval": 30, 00:04:44.973 "disable_chap": false, 00:04:44.973 "require_chap": false, 00:04:44.973 "mutual_chap": false, 00:04:44.973 "chap_group": 0, 00:04:44.973 "max_large_datain_per_connection": 64, 00:04:44.973 "max_r2t_per_connection": 4, 00:04:44.973 "pdu_pool_size": 36864, 00:04:44.973 "immediate_data_pool_size": 16384, 00:04:44.973 "data_out_pool_size": 2048 00:04:44.973 } 00:04:44.973 } 00:04:44.973 ] 00:04:44.973 } 00:04:44.973 ] 00:04:44.973 } 00:04:44.973 19:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:44.973 19:10:55 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1426128 00:04:44.973 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1426128 ']' 00:04:44.973 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1426128 00:04:44.973 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:44.973 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.973 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1426128 00:04:45.233 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.233 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.233 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1426128' 00:04:45.233 killing process with pid 1426128 00:04:45.233 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1426128 00:04:45.233 19:10:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1426128 00:04:45.491 19:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1426364 00:04:45.491 19:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:45.491 19:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1426364 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1426364 ']' 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1426364 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1426364 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1426364' 00:04:50.764 killing process with pid 1426364 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1426364 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1426364 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:50.764 00:04:50.764 real 0m6.201s 00:04:50.764 user 0m5.899s 00:04:50.764 sys 0m0.553s 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.764 ************************************ 00:04:50.764 END 
TEST skip_rpc_with_json 00:04:50.764 ************************************ 00:04:50.764 19:11:01 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.764 19:11:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:50.764 19:11:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.764 19:11:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.764 19:11:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.764 ************************************ 00:04:50.764 START TEST skip_rpc_with_delay 00:04:50.764 ************************************ 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.764 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.764 [2024-07-15 19:11:01.616276] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:50.764 [2024-07-15 19:11:01.616337] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:51.023 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:51.023 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.023 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.023 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.023 00:04:51.023 real 0m0.064s 00:04:51.023 user 0m0.040s 00:04:51.023 sys 0m0.024s 00:04:51.023 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.023 19:11:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:51.023 ************************************ 00:04:51.023 END TEST skip_rpc_with_delay 00:04:51.023 ************************************ 00:04:51.023 19:11:01 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.023 19:11:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:51.023 19:11:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:51.023 19:11:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:51.023 19:11:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.023 19:11:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.023 19:11:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.023 ************************************ 00:04:51.023 START TEST exit_on_failed_rpc_init 00:04:51.023 ************************************ 00:04:51.023 19:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:51.023 19:11:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1427335 00:04:51.023 19:11:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1427335 00:04:51.023 19:11:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.023 19:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1427335 ']' 00:04:51.023 19:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.023 19:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.023 19:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.023 19:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.023 19:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.023 [2024-07-15 19:11:01.723386] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
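Aside: exit_on_failed_rpc_init, starting here, verifies that a second target cannot reuse an RPC socket that is already bound: the lines below show the second spdk_tgt instance (-m 0x2) aborting with "RPC Unix domain socket path /var/tmp/spdk.sock in use" while the first instance keeps running. In outline (a sketch using only the flags visible in the log):

# first target owns the default RPC socket
build/bin/spdk_tgt -m 0x1 &
first_pid=$!
sleep 5
# a second target on another core mask must fail to initialize its RPC server
if build/bin/spdk_tgt -m 0x2; then
    echo "unexpected: second target started despite the socket being in use" >&2
    exit 1
fi
kill "$first_pid"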
00:04:51.023 [2024-07-15 19:11:01.723426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427335 ] 00:04:51.023 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.023 [2024-07-15 19:11:01.748997] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:51.023 [2024-07-15 19:11:01.777613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.023 [2024-07-15 19:11:01.818936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:51.282 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.282 [2024-07-15 19:11:02.062745] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:04:51.282 [2024-07-15 19:11:02.062792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427351 ] 00:04:51.282 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.282 [2024-07-15 19:11:02.088801] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:04:51.282 [2024-07-15 19:11:02.115071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.541 [2024-07-15 19:11:02.155420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.541 [2024-07-15 19:11:02.155482] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:51.541 [2024-07-15 19:11:02.155490] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:51.541 [2024-07-15 19:11:02.155496] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1427335 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1427335 ']' 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1427335 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1427335 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1427335' 00:04:51.541 killing process with pid 1427335 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1427335 00:04:51.541 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1427335 00:04:51.801 00:04:51.801 real 0m0.882s 00:04:51.801 user 0m0.935s 00:04:51.801 sys 0m0.368s 00:04:51.801 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.801 19:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.801 ************************************ 00:04:51.801 END TEST exit_on_failed_rpc_init 00:04:51.801 ************************************ 00:04:51.801 19:11:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.801 19:11:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:51.801 00:04:51.801 real 0m12.837s 00:04:51.801 user 0m12.150s 00:04:51.801 sys 0m1.417s 00:04:51.801 19:11:02 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.801 19:11:02 skip_rpc 
-- common/autotest_common.sh@10 -- # set +x 00:04:51.801 ************************************ 00:04:51.801 END TEST skip_rpc 00:04:51.801 ************************************ 00:04:51.801 19:11:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.801 19:11:02 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.801 19:11:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.801 19:11:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.801 19:11:02 -- common/autotest_common.sh@10 -- # set +x 00:04:51.801 ************************************ 00:04:51.801 START TEST rpc_client 00:04:51.801 ************************************ 00:04:51.801 19:11:02 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:52.062 * Looking for test storage... 00:04:52.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:52.062 19:11:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:52.062 OK 00:04:52.062 19:11:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:52.062 00:04:52.062 real 0m0.084s 00:04:52.062 user 0m0.034s 00:04:52.062 sys 0m0.052s 00:04:52.062 19:11:02 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.062 19:11:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:52.062 ************************************ 00:04:52.062 END TEST rpc_client 00:04:52.062 ************************************ 00:04:52.062 19:11:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.062 19:11:02 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:52.062 19:11:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.062 19:11:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.062 19:11:02 -- common/autotest_common.sh@10 -- # set +x 00:04:52.062 ************************************ 00:04:52.062 START TEST json_config 00:04:52.062 ************************************ 00:04:52.062 19:11:02 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:52.062 19:11:02 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.062 19:11:02 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.062 19:11:02 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.062 19:11:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.062 19:11:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.062 19:11:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.062 19:11:02 json_config -- paths/export.sh@5 -- # export PATH 00:04:52.062 19:11:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@47 -- # : 0 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@35 -- 
# '[' 0 -eq 1 ']' 00:04:52.062 19:11:02 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:52.062 INFO: JSON configuration test init 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:52.062 19:11:02 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:52.062 19:11:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.063 19:11:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.063 19:11:02 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:52.063 19:11:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.063 19:11:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.063 19:11:02 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:52.063 19:11:02 json_config -- json_config/common.sh@9 -- # local app=target 00:04:52.063 19:11:02 json_config -- json_config/common.sh@10 -- # shift 00:04:52.063 19:11:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:52.063 19:11:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:52.063 19:11:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:52.063 19:11:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.063 19:11:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.063 19:11:02 json_config -- json_config/common.sh@22 -- # 
app_pid["$app"]=1427686 00:04:52.063 19:11:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:52.063 Waiting for target to run... 00:04:52.063 19:11:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:52.063 19:11:02 json_config -- json_config/common.sh@25 -- # waitforlisten 1427686 /var/tmp/spdk_tgt.sock 00:04:52.063 19:11:02 json_config -- common/autotest_common.sh@829 -- # '[' -z 1427686 ']' 00:04:52.063 19:11:02 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.063 19:11:02 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.063 19:11:02 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:52.063 19:11:02 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.063 19:11:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.322 [2024-07-15 19:11:02.950718] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:04:52.322 [2024-07-15 19:11:02.950772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427686 ] 00:04:52.322 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.582 [2024-07-15 19:11:03.183609] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:52.582 [2024-07-15 19:11:03.213414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.582 [2024-07-15 19:11:03.237853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.150 19:11:03 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.150 19:11:03 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:53.150 19:11:03 json_config -- json_config/common.sh@26 -- # echo '' 00:04:53.150 00:04:53.150 19:11:03 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:53.150 19:11:03 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:53.150 19:11:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.150 19:11:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 19:11:03 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:53.150 19:11:03 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:53.150 19:11:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.150 19:11:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 19:11:03 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:53.150 19:11:03 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:53.150 19:11:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:56.438 19:11:06 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:56.438 19:11:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:56.438 19:11:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.438 19:11:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.438 19:11:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:56.438 19:11:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:56.438 19:11:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:56.438 19:11:06 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:56.438 19:11:06 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:56.438 19:11:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:56.438 19:11:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.438 19:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 
]] 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:56.438 19:11:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.438 19:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:56.438 19:11:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:56.438 MallocForNvmf0 00:04:56.438 19:11:07 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:56.438 19:11:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:56.697 MallocForNvmf1 00:04:56.697 19:11:07 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:56.697 19:11:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:56.955 [2024-07-15 19:11:07.556651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.955 19:11:07 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.955 19:11:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.955 19:11:07 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.955 19:11:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:57.212 19:11:07 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:57.212 19:11:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:57.212 19:11:08 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:57.212 19:11:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:57.471 [2024-07-15 
19:11:08.226723] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:57.471 19:11:08 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:57.471 19:11:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.471 19:11:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.471 19:11:08 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:57.471 19:11:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.471 19:11:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.471 19:11:08 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:57.471 19:11:08 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:57.471 19:11:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:57.731 MallocBdevForConfigChangeCheck 00:04:57.731 19:11:08 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:57.731 19:11:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.731 19:11:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.731 19:11:08 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:57.731 19:11:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.989 19:11:08 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:57.989 INFO: shutting down applications... 
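Stripped of the xtrace noise, the target configuration built above comes down to a short series of rpc.py calls against /var/tmp/spdk_tgt.sock. The commands below are the ones visible in the trace, collected into one illustrative sequence rather than a verbatim excerpt of json_config.sh:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Two malloc bdevs to act as namespaces.
$RPC bdev_malloc_create 8 512  --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

# TCP transport, then a subsystem carrying both namespaces and a listener on 127.0.0.1:4420.
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420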
00:04:57.990 19:11:08 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:57.990 19:11:08 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:57.990 19:11:08 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:57.990 19:11:08 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:59.891 Calling clear_iscsi_subsystem 00:04:59.891 Calling clear_nvmf_subsystem 00:04:59.891 Calling clear_nbd_subsystem 00:04:59.891 Calling clear_ublk_subsystem 00:04:59.891 Calling clear_vhost_blk_subsystem 00:04:59.891 Calling clear_vhost_scsi_subsystem 00:04:59.891 Calling clear_bdev_subsystem 00:04:59.891 19:11:10 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:59.891 19:11:10 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:59.891 19:11:10 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:59.891 19:11:10 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.891 19:11:10 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:59.891 19:11:10 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:59.891 19:11:10 json_config -- json_config/json_config.sh@345 -- # break 00:04:59.891 19:11:10 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:59.891 19:11:10 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:59.891 19:11:10 json_config -- json_config/common.sh@31 -- # local app=target 00:04:59.891 19:11:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.891 19:11:10 json_config -- json_config/common.sh@35 -- # [[ -n 1427686 ]] 00:04:59.891 19:11:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1427686 00:04:59.891 19:11:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.891 19:11:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.891 19:11:10 json_config -- json_config/common.sh@41 -- # kill -0 1427686 00:04:59.891 19:11:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.459 19:11:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.459 19:11:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.459 19:11:11 json_config -- json_config/common.sh@41 -- # kill -0 1427686 00:05:00.459 19:11:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.459 19:11:11 json_config -- json_config/common.sh@43 -- # break 00:05:00.459 19:11:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.459 19:11:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.459 SPDK target shutdown done 00:05:00.459 19:11:11 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:00.459 INFO: relaunching applications... 
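The shutdown just logged follows the pattern traced from json_config/common.sh: send SIGINT to the target, then poll its pid for up to thirty half-second intervals before announcing 'SPDK target shutdown done'. A condensed sketch of that loop, with the pid variable name as a placeholder:

kill -SIGINT "$tgt_pid"            # ask spdk_tgt to exit cleanly

i=0
while (( i < 30 )); do
    if ! kill -0 "$tgt_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'   # process has exited
        break
    fi
    sleep 0.5
    (( i++ ))
done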
00:05:00.459 19:11:11 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.459 19:11:11 json_config -- json_config/common.sh@9 -- # local app=target 00:05:00.459 19:11:11 json_config -- json_config/common.sh@10 -- # shift 00:05:00.459 19:11:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.459 19:11:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.459 19:11:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.459 19:11:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.459 19:11:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.459 19:11:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1429191 00:05:00.459 19:11:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.459 Waiting for target to run... 00:05:00.459 19:11:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.459 19:11:11 json_config -- json_config/common.sh@25 -- # waitforlisten 1429191 /var/tmp/spdk_tgt.sock 00:05:00.459 19:11:11 json_config -- common/autotest_common.sh@829 -- # '[' -z 1429191 ']' 00:05:00.459 19:11:11 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.459 19:11:11 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.459 19:11:11 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.459 19:11:11 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.459 19:11:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.459 [2024-07-15 19:11:11.238985] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:00.459 [2024-07-15 19:11:11.239036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429191 ] 00:05:00.459 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.027 [2024-07-15 19:11:11.644705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:01.027 [2024-07-15 19:11:11.675033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.027 [2024-07-15 19:11:11.708459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.318 [2024-07-15 19:11:14.703036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.318 [2024-07-15 19:11:14.735367] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.577 19:11:15 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.577 19:11:15 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:04.577 19:11:15 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.577 00:05:04.577 19:11:15 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:04.577 19:11:15 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:04.577 INFO: Checking if target configuration is the same... 00:05:04.577 19:11:15 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.577 19:11:15 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:04.577 19:11:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.577 + '[' 2 -ne 2 ']' 00:05:04.577 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:04.577 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:04.577 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.577 +++ basename /dev/fd/62 00:05:04.577 ++ mktemp /tmp/62.XXX 00:05:04.577 + tmp_file_1=/tmp/62.kX4 00:05:04.577 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.577 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.577 + tmp_file_2=/tmp/spdk_tgt_config.json.uwi 00:05:04.577 + ret=0 00:05:04.577 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.146 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.146 + diff -u /tmp/62.kX4 /tmp/spdk_tgt_config.json.uwi 00:05:05.146 + echo 'INFO: JSON config files are the same' 00:05:05.146 INFO: JSON config files are the same 00:05:05.146 + rm /tmp/62.kX4 /tmp/spdk_tgt_config.json.uwi 00:05:05.146 + exit 0 00:05:05.146 19:11:15 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:05.146 19:11:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:05.146 INFO: changing configuration and checking if this can be detected... 
00:05:05.146 19:11:15 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.146 19:11:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.146 19:11:15 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.146 19:11:15 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:05.146 19:11:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.146 + '[' 2 -ne 2 ']' 00:05:05.146 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:05.146 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:05.146 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.146 +++ basename /dev/fd/62 00:05:05.146 ++ mktemp /tmp/62.XXX 00:05:05.146 + tmp_file_1=/tmp/62.o9K 00:05:05.146 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.146 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:05.146 + tmp_file_2=/tmp/spdk_tgt_config.json.bDA 00:05:05.146 + ret=0 00:05:05.146 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.405 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.665 + diff -u /tmp/62.o9K /tmp/spdk_tgt_config.json.bDA 00:05:05.665 + ret=1 00:05:05.665 + echo '=== Start of file: /tmp/62.o9K ===' 00:05:05.665 + cat /tmp/62.o9K 00:05:05.665 + echo '=== End of file: /tmp/62.o9K ===' 00:05:05.665 + echo '' 00:05:05.665 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bDA ===' 00:05:05.665 + cat /tmp/spdk_tgt_config.json.bDA 00:05:05.665 + echo '=== End of file: /tmp/spdk_tgt_config.json.bDA ===' 00:05:05.665 + echo '' 00:05:05.665 + rm /tmp/62.o9K /tmp/spdk_tgt_config.json.bDA 00:05:05.665 + exit 1 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:05.665 INFO: configuration change detected. 
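Both comparisons above rely on the same mechanism: dump the running configuration over RPC, normalize it and the reference JSON with config_filter.py, and diff the results. Deleting MallocBdevForConfigChangeCheck is what turns the second diff non-empty. An illustrative condensation of the commands seen in the trace (the temp-file names are placeholders; json_diff.sh uses mktemp):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Dump the live config and sort both it and the saved file into a canonical order.
$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $SPDK/test/json_config/config_filter.py -method sort > /tmp/running.json
$SPDK/test/json_config/config_filter.py -method sort \
    < $SPDK/spdk_tgt_config.json > /tmp/reference.json

# Identical files => 'JSON config files are the same'; any difference => 'configuration change detected.'
diff -u /tmp/reference.json /tmp/running.json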
00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@317 -- # [[ -n 1429191 ]] 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.665 19:11:16 json_config -- json_config/json_config.sh@323 -- # killprocess 1429191 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@948 -- # '[' -z 1429191 ']' 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@952 -- # kill -0 1429191 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@953 -- # uname 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1429191 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1429191' 00:05:05.665 killing process with pid 1429191 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@967 -- # kill 1429191 00:05:05.665 19:11:16 json_config -- common/autotest_common.sh@972 -- # wait 1429191 00:05:07.044 19:11:17 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.044 19:11:17 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:07.044 19:11:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.044 19:11:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.304 19:11:17 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:07.304 19:11:17 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:07.304 INFO: Success 00:05:07.304 00:05:07.304 real 0m15.116s 
00:05:07.304 user 0m15.842s 00:05:07.304 sys 0m1.861s 00:05:07.304 19:11:17 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.304 19:11:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.304 ************************************ 00:05:07.304 END TEST json_config 00:05:07.304 ************************************ 00:05:07.304 19:11:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.304 19:11:17 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:07.304 19:11:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.304 19:11:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.304 19:11:17 -- common/autotest_common.sh@10 -- # set +x 00:05:07.304 ************************************ 00:05:07.304 START TEST json_config_extra_key 00:05:07.304 ************************************ 00:05:07.304 19:11:17 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:07.304 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.304 19:11:18 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.304 19:11:18 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.304 19:11:18 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.304 19:11:18 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.304 19:11:18 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.305 19:11:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.305 19:11:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.305 19:11:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:07.305 19:11:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.305 19:11:18 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:07.305 19:11:18 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:07.305 19:11:18 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:07.305 19:11:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.305 19:11:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.305 19:11:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.305 19:11:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:07.305 19:11:18 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:07.305 19:11:18 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:07.305 19:11:18 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:07.305 INFO: launching applications... 00:05:07.305 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1430457 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.305 Waiting for target to run... 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1430457 /var/tmp/spdk_tgt.sock 00:05:07.305 19:11:18 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1430457 ']' 00:05:07.305 19:11:18 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.305 19:11:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:07.305 19:11:18 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.305 19:11:18 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.305 19:11:18 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.305 19:11:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.305 [2024-07-15 19:11:18.096453] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:05:07.305 [2024-07-15 19:11:18.096501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430457 ] 00:05:07.305 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.565 [2024-07-15 19:11:18.328644] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:07.565 [2024-07-15 19:11:18.357765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.565 [2024-07-15 19:11:18.381674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.134 19:11:18 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.134 19:11:18 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:08.134 19:11:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:08.134 00:05:08.134 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:08.134 INFO: shutting down applications... 00:05:08.134 19:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:08.134 19:11:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:08.134 19:11:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.134 19:11:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1430457 ]] 00:05:08.134 19:11:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1430457 00:05:08.134 19:11:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.134 19:11:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.134 19:11:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1430457 00:05:08.134 19:11:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.702 19:11:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.702 19:11:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.702 19:11:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1430457 00:05:08.702 19:11:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.702 19:11:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:08.702 19:11:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.702 19:11:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.702 SPDK target shutdown done 00:05:08.702 19:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:08.702 Success 00:05:08.702 00:05:08.702 real 0m1.418s 00:05:08.702 user 0m1.199s 00:05:08.702 sys 0m0.358s 00:05:08.702 19:11:19 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.702 19:11:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:08.702 ************************************ 00:05:08.703 END TEST json_config_extra_key 00:05:08.703 ************************************ 00:05:08.703 19:11:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.703 19:11:19 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.703 19:11:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.703 19:11:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.703 19:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:08.703 ************************************ 00:05:08.703 START TEST alias_rpc 00:05:08.703 ************************************ 00:05:08.703 19:11:19 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.703 * Looking for test storage... 00:05:08.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:08.962 19:11:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:08.962 19:11:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1430738 00:05:08.962 19:11:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1430738 00:05:08.962 19:11:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.962 19:11:19 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1430738 ']' 00:05:08.962 19:11:19 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.962 19:11:19 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.963 19:11:19 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.963 19:11:19 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.963 19:11:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.963 [2024-07-15 19:11:19.610785] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:08.963 [2024-07-15 19:11:19.610829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430738 ] 00:05:08.963 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.963 [2024-07-15 19:11:19.637370] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:08.963 [2024-07-15 19:11:19.665884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.963 [2024-07-15 19:11:19.707312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.222 19:11:19 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.222 19:11:19 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:09.222 19:11:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:09.481 19:11:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1430738 00:05:09.481 19:11:20 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1430738 ']' 00:05:09.481 19:11:20 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1430738 00:05:09.481 19:11:20 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:09.481 19:11:20 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.481 19:11:20 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1430738 00:05:09.481 19:11:20 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.481 19:11:20 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.481 19:11:20 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1430738' 00:05:09.481 killing process with pid 1430738 00:05:09.481 19:11:20 alias_rpc -- common/autotest_common.sh@967 -- # kill 1430738 00:05:09.481 19:11:20 alias_rpc -- common/autotest_common.sh@972 -- # wait 1430738 00:05:09.741 00:05:09.741 real 0m0.980s 00:05:09.741 user 0m1.000s 00:05:09.741 sys 0m0.381s 00:05:09.741 19:11:20 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.741 19:11:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.741 ************************************ 00:05:09.741 END TEST alias_rpc 00:05:09.741 ************************************ 00:05:09.741 19:11:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.741 19:11:20 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:09.741 19:11:20 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:09.741 19:11:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.741 19:11:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.741 19:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:09.741 ************************************ 00:05:09.741 START TEST spdkcli_tcp 00:05:09.741 ************************************ 00:05:09.741 19:11:20 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:09.741 * Looking for test storage... 
00:05:09.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:09.741 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:09.741 19:11:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:09.741 19:11:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:09.741 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:09.741 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:09.741 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:09.741 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:09.741 19:11:20 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.741 19:11:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.000 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.000 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1431019 00:05:10.000 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1431019 00:05:10.000 19:11:20 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1431019 ']' 00:05:10.000 19:11:20 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.000 19:11:20 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.000 19:11:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.000 19:11:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.000 19:11:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.000 [2024-07-15 19:11:20.628558] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:10.000 [2024-07-15 19:11:20.628606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431019 ] 00:05:10.000 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.000 [2024-07-15 19:11:20.654288] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:10.000 [2024-07-15 19:11:20.681465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.000 [2024-07-15 19:11:20.722876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.000 [2024-07-15 19:11:20.722878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.260 19:11:20 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.260 19:11:20 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:10.260 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1431023 00:05:10.260 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:10.260 19:11:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.260 [ 00:05:10.260 "bdev_malloc_delete", 00:05:10.260 "bdev_malloc_create", 00:05:10.260 "bdev_null_resize", 00:05:10.260 "bdev_null_delete", 00:05:10.260 "bdev_null_create", 00:05:10.260 "bdev_nvme_cuse_unregister", 00:05:10.260 "bdev_nvme_cuse_register", 00:05:10.260 "bdev_opal_new_user", 00:05:10.260 "bdev_opal_set_lock_state", 00:05:10.260 "bdev_opal_delete", 00:05:10.260 "bdev_opal_get_info", 00:05:10.260 "bdev_opal_create", 00:05:10.260 "bdev_nvme_opal_revert", 00:05:10.260 "bdev_nvme_opal_init", 00:05:10.260 "bdev_nvme_send_cmd", 00:05:10.260 "bdev_nvme_get_path_iostat", 00:05:10.260 "bdev_nvme_get_mdns_discovery_info", 00:05:10.260 "bdev_nvme_stop_mdns_discovery", 00:05:10.260 "bdev_nvme_start_mdns_discovery", 00:05:10.260 "bdev_nvme_set_multipath_policy", 00:05:10.260 "bdev_nvme_set_preferred_path", 00:05:10.260 "bdev_nvme_get_io_paths", 00:05:10.260 "bdev_nvme_remove_error_injection", 00:05:10.260 "bdev_nvme_add_error_injection", 00:05:10.260 "bdev_nvme_get_discovery_info", 00:05:10.260 "bdev_nvme_stop_discovery", 00:05:10.260 "bdev_nvme_start_discovery", 00:05:10.260 "bdev_nvme_get_controller_health_info", 00:05:10.260 "bdev_nvme_disable_controller", 00:05:10.260 "bdev_nvme_enable_controller", 00:05:10.260 "bdev_nvme_reset_controller", 00:05:10.260 "bdev_nvme_get_transport_statistics", 00:05:10.260 "bdev_nvme_apply_firmware", 00:05:10.260 "bdev_nvme_detach_controller", 00:05:10.260 "bdev_nvme_get_controllers", 00:05:10.260 "bdev_nvme_attach_controller", 00:05:10.260 "bdev_nvme_set_hotplug", 00:05:10.260 "bdev_nvme_set_options", 00:05:10.260 "bdev_passthru_delete", 00:05:10.260 "bdev_passthru_create", 00:05:10.260 "bdev_lvol_set_parent_bdev", 00:05:10.260 "bdev_lvol_set_parent", 00:05:10.260 "bdev_lvol_check_shallow_copy", 00:05:10.260 "bdev_lvol_start_shallow_copy", 00:05:10.260 "bdev_lvol_grow_lvstore", 00:05:10.260 "bdev_lvol_get_lvols", 00:05:10.260 "bdev_lvol_get_lvstores", 00:05:10.260 "bdev_lvol_delete", 00:05:10.260 "bdev_lvol_set_read_only", 00:05:10.260 "bdev_lvol_resize", 00:05:10.260 "bdev_lvol_decouple_parent", 00:05:10.260 "bdev_lvol_inflate", 00:05:10.260 "bdev_lvol_rename", 00:05:10.260 "bdev_lvol_clone_bdev", 00:05:10.260 "bdev_lvol_clone", 00:05:10.260 "bdev_lvol_snapshot", 00:05:10.260 "bdev_lvol_create", 00:05:10.260 "bdev_lvol_delete_lvstore", 00:05:10.260 "bdev_lvol_rename_lvstore", 00:05:10.260 "bdev_lvol_create_lvstore", 00:05:10.260 "bdev_raid_set_options", 00:05:10.260 "bdev_raid_remove_base_bdev", 00:05:10.260 "bdev_raid_add_base_bdev", 00:05:10.260 "bdev_raid_delete", 00:05:10.260 "bdev_raid_create", 00:05:10.260 "bdev_raid_get_bdevs", 00:05:10.260 "bdev_error_inject_error", 00:05:10.260 "bdev_error_delete", 
00:05:10.260 "bdev_error_create", 00:05:10.260 "bdev_split_delete", 00:05:10.260 "bdev_split_create", 00:05:10.260 "bdev_delay_delete", 00:05:10.260 "bdev_delay_create", 00:05:10.260 "bdev_delay_update_latency", 00:05:10.260 "bdev_zone_block_delete", 00:05:10.260 "bdev_zone_block_create", 00:05:10.260 "blobfs_create", 00:05:10.260 "blobfs_detect", 00:05:10.260 "blobfs_set_cache_size", 00:05:10.260 "bdev_aio_delete", 00:05:10.260 "bdev_aio_rescan", 00:05:10.260 "bdev_aio_create", 00:05:10.260 "bdev_ftl_set_property", 00:05:10.260 "bdev_ftl_get_properties", 00:05:10.260 "bdev_ftl_get_stats", 00:05:10.260 "bdev_ftl_unmap", 00:05:10.260 "bdev_ftl_unload", 00:05:10.260 "bdev_ftl_delete", 00:05:10.260 "bdev_ftl_load", 00:05:10.260 "bdev_ftl_create", 00:05:10.260 "bdev_virtio_attach_controller", 00:05:10.260 "bdev_virtio_scsi_get_devices", 00:05:10.260 "bdev_virtio_detach_controller", 00:05:10.260 "bdev_virtio_blk_set_hotplug", 00:05:10.260 "bdev_iscsi_delete", 00:05:10.260 "bdev_iscsi_create", 00:05:10.260 "bdev_iscsi_set_options", 00:05:10.260 "accel_error_inject_error", 00:05:10.260 "ioat_scan_accel_module", 00:05:10.260 "dsa_scan_accel_module", 00:05:10.260 "iaa_scan_accel_module", 00:05:10.260 "vfu_virtio_create_scsi_endpoint", 00:05:10.260 "vfu_virtio_scsi_remove_target", 00:05:10.260 "vfu_virtio_scsi_add_target", 00:05:10.260 "vfu_virtio_create_blk_endpoint", 00:05:10.260 "vfu_virtio_delete_endpoint", 00:05:10.260 "keyring_file_remove_key", 00:05:10.260 "keyring_file_add_key", 00:05:10.260 "keyring_linux_set_options", 00:05:10.260 "iscsi_get_histogram", 00:05:10.260 "iscsi_enable_histogram", 00:05:10.260 "iscsi_set_options", 00:05:10.260 "iscsi_get_auth_groups", 00:05:10.260 "iscsi_auth_group_remove_secret", 00:05:10.260 "iscsi_auth_group_add_secret", 00:05:10.260 "iscsi_delete_auth_group", 00:05:10.260 "iscsi_create_auth_group", 00:05:10.260 "iscsi_set_discovery_auth", 00:05:10.260 "iscsi_get_options", 00:05:10.260 "iscsi_target_node_request_logout", 00:05:10.260 "iscsi_target_node_set_redirect", 00:05:10.260 "iscsi_target_node_set_auth", 00:05:10.260 "iscsi_target_node_add_lun", 00:05:10.260 "iscsi_get_stats", 00:05:10.260 "iscsi_get_connections", 00:05:10.260 "iscsi_portal_group_set_auth", 00:05:10.260 "iscsi_start_portal_group", 00:05:10.260 "iscsi_delete_portal_group", 00:05:10.260 "iscsi_create_portal_group", 00:05:10.260 "iscsi_get_portal_groups", 00:05:10.260 "iscsi_delete_target_node", 00:05:10.260 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.260 "iscsi_target_node_add_pg_ig_maps", 00:05:10.260 "iscsi_create_target_node", 00:05:10.260 "iscsi_get_target_nodes", 00:05:10.260 "iscsi_delete_initiator_group", 00:05:10.260 "iscsi_initiator_group_remove_initiators", 00:05:10.260 "iscsi_initiator_group_add_initiators", 00:05:10.260 "iscsi_create_initiator_group", 00:05:10.260 "iscsi_get_initiator_groups", 00:05:10.260 "nvmf_set_crdt", 00:05:10.260 "nvmf_set_config", 00:05:10.260 "nvmf_set_max_subsystems", 00:05:10.260 "nvmf_stop_mdns_prr", 00:05:10.260 "nvmf_publish_mdns_prr", 00:05:10.260 "nvmf_subsystem_get_listeners", 00:05:10.260 "nvmf_subsystem_get_qpairs", 00:05:10.260 "nvmf_subsystem_get_controllers", 00:05:10.260 "nvmf_get_stats", 00:05:10.260 "nvmf_get_transports", 00:05:10.260 "nvmf_create_transport", 00:05:10.260 "nvmf_get_targets", 00:05:10.260 "nvmf_delete_target", 00:05:10.260 "nvmf_create_target", 00:05:10.260 "nvmf_subsystem_allow_any_host", 00:05:10.260 "nvmf_subsystem_remove_host", 00:05:10.260 "nvmf_subsystem_add_host", 00:05:10.260 "nvmf_ns_remove_host", 
00:05:10.260 "nvmf_ns_add_host", 00:05:10.260 "nvmf_subsystem_remove_ns", 00:05:10.260 "nvmf_subsystem_add_ns", 00:05:10.260 "nvmf_subsystem_listener_set_ana_state", 00:05:10.260 "nvmf_discovery_get_referrals", 00:05:10.260 "nvmf_discovery_remove_referral", 00:05:10.260 "nvmf_discovery_add_referral", 00:05:10.260 "nvmf_subsystem_remove_listener", 00:05:10.260 "nvmf_subsystem_add_listener", 00:05:10.260 "nvmf_delete_subsystem", 00:05:10.260 "nvmf_create_subsystem", 00:05:10.260 "nvmf_get_subsystems", 00:05:10.260 "env_dpdk_get_mem_stats", 00:05:10.260 "nbd_get_disks", 00:05:10.260 "nbd_stop_disk", 00:05:10.260 "nbd_start_disk", 00:05:10.260 "ublk_recover_disk", 00:05:10.260 "ublk_get_disks", 00:05:10.260 "ublk_stop_disk", 00:05:10.260 "ublk_start_disk", 00:05:10.260 "ublk_destroy_target", 00:05:10.260 "ublk_create_target", 00:05:10.260 "virtio_blk_create_transport", 00:05:10.260 "virtio_blk_get_transports", 00:05:10.261 "vhost_controller_set_coalescing", 00:05:10.261 "vhost_get_controllers", 00:05:10.261 "vhost_delete_controller", 00:05:10.261 "vhost_create_blk_controller", 00:05:10.261 "vhost_scsi_controller_remove_target", 00:05:10.261 "vhost_scsi_controller_add_target", 00:05:10.261 "vhost_start_scsi_controller", 00:05:10.261 "vhost_create_scsi_controller", 00:05:10.261 "thread_set_cpumask", 00:05:10.261 "framework_get_governor", 00:05:10.261 "framework_get_scheduler", 00:05:10.261 "framework_set_scheduler", 00:05:10.261 "framework_get_reactors", 00:05:10.261 "thread_get_io_channels", 00:05:10.261 "thread_get_pollers", 00:05:10.261 "thread_get_stats", 00:05:10.261 "framework_monitor_context_switch", 00:05:10.261 "spdk_kill_instance", 00:05:10.261 "log_enable_timestamps", 00:05:10.261 "log_get_flags", 00:05:10.261 "log_clear_flag", 00:05:10.261 "log_set_flag", 00:05:10.261 "log_get_level", 00:05:10.261 "log_set_level", 00:05:10.261 "log_get_print_level", 00:05:10.261 "log_set_print_level", 00:05:10.261 "framework_enable_cpumask_locks", 00:05:10.261 "framework_disable_cpumask_locks", 00:05:10.261 "framework_wait_init", 00:05:10.261 "framework_start_init", 00:05:10.261 "scsi_get_devices", 00:05:10.261 "bdev_get_histogram", 00:05:10.261 "bdev_enable_histogram", 00:05:10.261 "bdev_set_qos_limit", 00:05:10.261 "bdev_set_qd_sampling_period", 00:05:10.261 "bdev_get_bdevs", 00:05:10.261 "bdev_reset_iostat", 00:05:10.261 "bdev_get_iostat", 00:05:10.261 "bdev_examine", 00:05:10.261 "bdev_wait_for_examine", 00:05:10.261 "bdev_set_options", 00:05:10.261 "notify_get_notifications", 00:05:10.261 "notify_get_types", 00:05:10.261 "accel_get_stats", 00:05:10.261 "accel_set_options", 00:05:10.261 "accel_set_driver", 00:05:10.261 "accel_crypto_key_destroy", 00:05:10.261 "accel_crypto_keys_get", 00:05:10.261 "accel_crypto_key_create", 00:05:10.261 "accel_assign_opc", 00:05:10.261 "accel_get_module_info", 00:05:10.261 "accel_get_opc_assignments", 00:05:10.261 "vmd_rescan", 00:05:10.261 "vmd_remove_device", 00:05:10.261 "vmd_enable", 00:05:10.261 "sock_get_default_impl", 00:05:10.261 "sock_set_default_impl", 00:05:10.261 "sock_impl_set_options", 00:05:10.261 "sock_impl_get_options", 00:05:10.261 "iobuf_get_stats", 00:05:10.261 "iobuf_set_options", 00:05:10.261 "keyring_get_keys", 00:05:10.261 "framework_get_pci_devices", 00:05:10.261 "framework_get_config", 00:05:10.261 "framework_get_subsystems", 00:05:10.261 "vfu_tgt_set_base_path", 00:05:10.261 "trace_get_info", 00:05:10.261 "trace_get_tpoint_group_mask", 00:05:10.261 "trace_disable_tpoint_group", 00:05:10.261 "trace_enable_tpoint_group", 00:05:10.261 
"trace_clear_tpoint_mask", 00:05:10.261 "trace_set_tpoint_mask", 00:05:10.261 "spdk_get_version", 00:05:10.261 "rpc_get_methods" 00:05:10.261 ] 00:05:10.261 19:11:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.261 19:11:21 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.261 19:11:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.520 19:11:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.520 19:11:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1431019 00:05:10.520 19:11:21 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1431019 ']' 00:05:10.520 19:11:21 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1431019 00:05:10.520 19:11:21 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:10.520 19:11:21 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.520 19:11:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1431019 00:05:10.520 19:11:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.520 19:11:21 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.520 19:11:21 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1431019' 00:05:10.520 killing process with pid 1431019 00:05:10.520 19:11:21 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1431019 00:05:10.520 19:11:21 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1431019 00:05:10.781 00:05:10.781 real 0m0.959s 00:05:10.781 user 0m1.675s 00:05:10.781 sys 0m0.378s 00:05:10.781 19:11:21 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.781 19:11:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.781 ************************************ 00:05:10.781 END TEST spdkcli_tcp 00:05:10.781 ************************************ 00:05:10.781 19:11:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:10.781 19:11:21 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.781 19:11:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.781 19:11:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.781 19:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:10.781 ************************************ 00:05:10.781 START TEST dpdk_mem_utility 00:05:10.781 ************************************ 00:05:10.781 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.781 * Looking for test storage... 
00:05:10.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:10.781 19:11:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:10.781 19:11:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1431128 00:05:10.781 19:11:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1431128 00:05:10.781 19:11:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.781 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1431128 ']' 00:05:10.781 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.781 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.781 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.781 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.781 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.041 [2024-07-15 19:11:21.672623] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:11.041 [2024-07-15 19:11:21.672672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431128 ] 00:05:11.041 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.041 [2024-07-15 19:11:21.699045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
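Note on the waitforlisten step above: it blocks until the freshly started spdk_tgt accepts connections on /var/tmp/spdk.sock. A hypothetical Python equivalent of that polling loop is sketched below; the socket path comes from the log, while the retry count and interval are arbitrary values chosen for the sketch.

#!/usr/bin/env python3
# Hypothetical sketch of what "waitforlisten" amounts to: poll the UNIX domain
# socket until the target process accepts a connection or we give up.
import socket
import time

def wait_for_unix_socket(path, retries=100, delay=0.2):
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(delay)
        finally:
            s.close()
    return False

if __name__ == "__main__":
    ok = wait_for_unix_socket("/var/tmp/spdk.sock")
    print("listening" if ok else "timed out")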
00:05:11.041 [2024-07-15 19:11:21.727616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.041 [2024-07-15 19:11:21.768082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.301 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.301 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:11.301 19:11:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:11.301 19:11:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:11.301 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.301 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.301 { 00:05:11.301 "filename": "/tmp/spdk_mem_dump.txt" 00:05:11.301 } 00:05:11.301 19:11:21 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.301 19:11:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:11.301 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:11.301 1 heaps totaling size 814.000000 MiB 00:05:11.301 size: 814.000000 MiB heap id: 0 00:05:11.301 end heaps---------- 00:05:11.301 8 mempools totaling size 598.116089 MiB 00:05:11.301 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:11.301 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:11.301 size: 84.521057 MiB name: bdev_io_1431128 00:05:11.301 size: 51.011292 MiB name: evtpool_1431128 00:05:11.301 size: 50.003479 MiB name: msgpool_1431128 00:05:11.301 size: 21.763794 MiB name: PDU_Pool 00:05:11.301 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:11.301 size: 0.026123 MiB name: Session_Pool 00:05:11.301 end mempools------- 00:05:11.301 6 memzones totaling size 4.142822 MiB 00:05:11.301 size: 1.000366 MiB name: RG_ring_0_1431128 00:05:11.301 size: 1.000366 MiB name: RG_ring_1_1431128 00:05:11.301 size: 1.000366 MiB name: RG_ring_4_1431128 00:05:11.301 size: 1.000366 MiB name: RG_ring_5_1431128 00:05:11.301 size: 0.125366 MiB name: RG_ring_2_1431128 00:05:11.301 size: 0.015991 MiB name: RG_ring_3_1431128 00:05:11.301 end memzones------- 00:05:11.301 19:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:11.301 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:11.301 list of free elements. 
size: 12.519348 MiB 00:05:11.301 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:11.301 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:11.301 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:11.301 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:11.301 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:11.301 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:11.301 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:11.301 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:11.301 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:11.301 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:11.301 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:11.301 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:11.301 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:11.301 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:11.301 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:11.301 list of standard malloc elements. size: 199.218079 MiB 00:05:11.301 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:11.301 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:11.301 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:11.301 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:11.301 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:11.301 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:11.301 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:11.301 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:11.301 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:11.301 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:11.301 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:11.301 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:11.301 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:11.301 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:11.301 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:11.301 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:11.301 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:11.301 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:11.301 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:11.301 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:11.301 list of memzone associated elements. size: 602.262573 MiB 00:05:11.301 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:11.301 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:11.301 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:11.301 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:11.301 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:11.301 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1431128_0 00:05:11.301 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:11.301 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1431128_0 00:05:11.301 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:11.301 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1431128_0 00:05:11.301 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:11.301 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:11.301 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:11.301 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:11.301 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:11.301 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1431128 00:05:11.301 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:11.301 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1431128 00:05:11.301 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:11.301 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1431128 00:05:11.301 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:11.301 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:11.301 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:11.301 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:11.301 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:11.301 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:11.301 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:11.301 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:11.301 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:11.301 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1431128 00:05:11.301 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:11.301 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1431128 00:05:11.301 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:11.301 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1431128 00:05:11.301 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:11.301 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1431128 00:05:11.301 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:11.301 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1431128 00:05:11.301 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:11.302 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:11.302 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:11.302 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:11.302 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:11.302 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:11.302 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:11.302 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1431128 00:05:11.302 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:11.302 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:11.302 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:11.302 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:11.302 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:11.302 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1431128 00:05:11.302 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:11.302 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:11.302 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:11.302 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1431128 00:05:11.302 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:11.302 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1431128 00:05:11.302 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:11.302 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:11.302 19:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:11.302 19:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1431128 00:05:11.302 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1431128 ']' 00:05:11.302 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1431128 00:05:11.302 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:11.302 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.302 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1431128 00:05:11.302 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.302 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.302 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1431128' 00:05:11.302 killing process with pid 1431128 00:05:11.302 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1431128 00:05:11.302 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1431128 00:05:11.560 00:05:11.560 real 0m0.879s 00:05:11.560 user 0m0.827s 00:05:11.560 sys 0m0.372s 00:05:11.560 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.560 19:11:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.560 ************************************ 00:05:11.560 END TEST dpdk_mem_utility 00:05:11.560 ************************************ 00:05:11.819 19:11:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.819 19:11:22 -- spdk/autotest.sh@181 -- # 
run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:11.819 19:11:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.819 19:11:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.819 19:11:22 -- common/autotest_common.sh@10 -- # set +x 00:05:11.819 ************************************ 00:05:11.819 START TEST event 00:05:11.819 ************************************ 00:05:11.819 19:11:22 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:11.819 * Looking for test storage... 00:05:11.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:11.819 19:11:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:11.819 19:11:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:11.819 19:11:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:11.819 19:11:22 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:11.819 19:11:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.819 19:11:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.819 ************************************ 00:05:11.819 START TEST event_perf 00:05:11.819 ************************************ 00:05:11.819 19:11:22 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:11.819 Running I/O for 1 seconds...[2024-07-15 19:11:22.604303] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:11.819 [2024-07-15 19:11:22.604369] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431379 ] 00:05:11.819 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.819 [2024-07-15 19:11:22.633895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:11.819 [2024-07-15 19:11:22.662745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.077 [2024-07-15 19:11:22.706289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.077 [2024-07-15 19:11:22.706386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.077 [2024-07-15 19:11:22.706477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.077 [2024-07-15 19:11:22.706478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.014 Running I/O for 1 seconds... 00:05:13.014 lcore 0: 205188 00:05:13.014 lcore 1: 205188 00:05:13.014 lcore 2: 205188 00:05:13.014 lcore 3: 205188 00:05:13.014 done. 
00:05:13.014 00:05:13.014 real 0m1.183s 00:05:13.014 user 0m4.104s 00:05:13.014 sys 0m0.076s 00:05:13.014 19:11:23 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.014 19:11:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.014 ************************************ 00:05:13.014 END TEST event_perf 00:05:13.014 ************************************ 00:05:13.014 19:11:23 event -- common/autotest_common.sh@1142 -- # return 0 00:05:13.014 19:11:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.014 19:11:23 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:13.014 19:11:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.014 19:11:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.014 ************************************ 00:05:13.014 START TEST event_reactor 00:05:13.014 ************************************ 00:05:13.014 19:11:23 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.014 [2024-07-15 19:11:23.858859] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:13.014 [2024-07-15 19:11:23.858926] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431631 ] 00:05:13.333 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.333 [2024-07-15 19:11:23.888844] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
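Note on the event_perf run above: the four "lcore N:" counters line up with the -m 0xF core mask it was launched with. The small sketch below is pure Python arithmetic with no SPDK involvement; it expands such a hexadecimal mask into the lcore list (0xF covers cores 0-3, and the 0x3 mask used by app_repeat later in this log covers cores 0-1).

# Decode an SPDK/DPDK-style -m core mask into the list of lcore indices.
def mask_to_lcores(mask):
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

print(mask_to_lcores(0xF))  # [0, 1, 2, 3]
print(mask_to_lcores(0x3))  # [0, 1]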
00:05:13.333 [2024-07-15 19:11:23.917287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.333 [2024-07-15 19:11:23.954825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.270 test_start 00:05:14.270 oneshot 00:05:14.270 tick 100 00:05:14.270 tick 100 00:05:14.270 tick 250 00:05:14.270 tick 100 00:05:14.270 tick 100 00:05:14.270 tick 250 00:05:14.270 tick 100 00:05:14.270 tick 500 00:05:14.270 tick 100 00:05:14.270 tick 100 00:05:14.270 tick 250 00:05:14.270 tick 100 00:05:14.270 tick 100 00:05:14.270 test_end 00:05:14.270 00:05:14.270 real 0m1.175s 00:05:14.270 user 0m1.098s 00:05:14.270 sys 0m0.073s 00:05:14.270 19:11:25 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.270 19:11:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:14.270 ************************************ 00:05:14.270 END TEST event_reactor 00:05:14.270 ************************************ 00:05:14.270 19:11:25 event -- common/autotest_common.sh@1142 -- # return 0 00:05:14.270 19:11:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.270 19:11:25 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:14.270 19:11:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.270 19:11:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.270 ************************************ 00:05:14.270 START TEST event_reactor_perf 00:05:14.270 ************************************ 00:05:14.270 19:11:25 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.270 [2024-07-15 19:11:25.104240] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:14.270 [2024-07-15 19:11:25.104310] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431877 ] 00:05:14.528 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.528 [2024-07-15 19:11:25.135136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:14.528 [2024-07-15 19:11:25.163450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.528 [2024-07-15 19:11:25.203857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.466 test_start 00:05:15.466 test_end 00:05:15.466 Performance: 498887 events per second 00:05:15.466 00:05:15.466 real 0m1.182s 00:05:15.466 user 0m1.101s 00:05:15.466 sys 0m0.076s 00:05:15.466 19:11:26 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.466 19:11:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.466 ************************************ 00:05:15.466 END TEST event_reactor_perf 00:05:15.466 ************************************ 00:05:15.466 19:11:26 event -- common/autotest_common.sh@1142 -- # return 0 00:05:15.466 19:11:26 event -- event/event.sh@49 -- # uname -s 00:05:15.466 19:11:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:15.466 19:11:26 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:15.466 19:11:26 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.466 19:11:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.466 19:11:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.725 ************************************ 00:05:15.725 START TEST event_scheduler 00:05:15.725 ************************************ 00:05:15.726 19:11:26 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:15.726 * Looking for test storage... 00:05:15.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:15.726 19:11:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:15.726 19:11:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1432161 00:05:15.726 19:11:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.726 19:11:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:15.726 19:11:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1432161 00:05:15.726 19:11:26 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1432161 ']' 00:05:15.726 19:11:26 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.726 19:11:26 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.726 19:11:26 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.726 19:11:26 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.726 19:11:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.726 [2024-07-15 19:11:26.476221] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:05:15.726 [2024-07-15 19:11:26.476274] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432161 ] 00:05:15.726 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.726 [2024-07-15 19:11:26.502864] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:15.726 [2024-07-15 19:11:26.528047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.726 [2024-07-15 19:11:26.570742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.726 [2024-07-15 19:11:26.570828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.726 [2024-07-15 19:11:26.570916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.726 [2024-07-15 19:11:26.570918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:15.986 19:11:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 [2024-07-15 19:11:26.627477] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:15.986 [2024-07-15 19:11:26.627494] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:15.986 [2024-07-15 19:11:26.627503] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:15.986 [2024-07-15 19:11:26.627508] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:15.986 [2024-07-15 19:11:26.627513] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.986 19:11:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 [2024-07-15 19:11:26.694261] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.986 19:11:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 ************************************ 00:05:15.986 START TEST scheduler_create_thread 00:05:15.986 ************************************ 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 2 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 3 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 4 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 5 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 6 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 7 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 8 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 9 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 10 00:05:15.986 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.987 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:15.987 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.987 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.987 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.987 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:15.987 19:11:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:15.987 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.987 19:11:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.555 19:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.555 19:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:16.555 19:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.555 19:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.014 19:11:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.014 19:11:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:18.014 19:11:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:18.014 19:11:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.014 19:11:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.394 19:11:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.394 00:05:19.394 real 0m3.099s 00:05:19.394 user 0m0.025s 00:05:19.394 sys 0m0.004s 00:05:19.394 19:11:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.394 19:11:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.394 ************************************ 00:05:19.394 END TEST scheduler_create_thread 00:05:19.394 ************************************ 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:19.394 19:11:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:19.394 19:11:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1432161 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1432161 ']' 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1432161 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1432161 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1432161' 00:05:19.394 killing process with pid 1432161 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1432161 00:05:19.394 19:11:29 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1432161 00:05:19.394 [2024-07-15 19:11:30.209444] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
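Note on the scheduler setup above: the harness switched the target to the dynamic scheduler with rpc_cmd framework_set_scheduler dynamic before creating the pinned test threads. A hedged sketch of issuing that RPC directly over the UNIX domain socket is below; the method name appears in the rpc_get_methods listing earlier in this log, but the JSON parameter key ("name") and the single-recv response handling are assumptions of this sketch, and scripts/rpc.py remains the supported interface.

#!/usr/bin/env python3
# Hypothetical direct JSON-RPC equivalent of "rpc_cmd framework_set_scheduler dynamic".
# The "name" parameter key is an assumption of this sketch.
import json
import socket

def set_scheduler(sock_path, scheduler_name):
    req = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "framework_set_scheduler",
        "params": {"name": scheduler_name},
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        # A single recv is assumed sufficient for this small response.
        return json.loads(s.recv(65536).decode())

if __name__ == "__main__":
    print(set_scheduler("/var/tmp/spdk.sock", "dynamic"))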
00:05:19.654 00:05:19.654 real 0m4.074s 00:05:19.654 user 0m6.572s 00:05:19.654 sys 0m0.342s 00:05:19.654 19:11:30 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.654 19:11:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.654 ************************************ 00:05:19.654 END TEST event_scheduler 00:05:19.654 ************************************ 00:05:19.654 19:11:30 event -- common/autotest_common.sh@1142 -- # return 0 00:05:19.654 19:11:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:19.654 19:11:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:19.654 19:11:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.654 19:11:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.654 19:11:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.654 ************************************ 00:05:19.654 START TEST app_repeat 00:05:19.654 ************************************ 00:05:19.654 19:11:30 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1432901 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1432901' 00:05:19.654 Process app_repeat pid: 1432901 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:19.654 spdk_app_start Round 0 00:05:19.654 19:11:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1432901 /var/tmp/spdk-nbd.sock 00:05:19.654 19:11:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1432901 ']' 00:05:19.654 19:11:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.654 19:11:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.654 19:11:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.654 19:11:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.654 19:11:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.914 [2024-07-15 19:11:30.522376] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:05:19.914 [2024-07-15 19:11:30.522432] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432901 ] 00:05:19.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.914 [2024-07-15 19:11:30.550127] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:19.914 [2024-07-15 19:11:30.577958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.914 [2024-07-15 19:11:30.619306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.914 [2024-07-15 19:11:30.619308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.914 19:11:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.914 19:11:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:19.914 19:11:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.173 Malloc0 00:05:20.173 19:11:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.433 Malloc1 00:05:20.433 19:11:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.433 /dev/nbd0 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:20.433 19:11:31 
event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.433 1+0 records in 00:05:20.433 1+0 records out 00:05:20.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146521 s, 28.0 MB/s 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:20.433 19:11:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.433 19:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.692 /dev/nbd1 00:05:20.692 19:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.692 19:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.692 1+0 records in 00:05:20.692 1+0 records out 00:05:20.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197385 s, 20.8 MB/s 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.692 19:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:20.692 
19:11:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:20.692 19:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.692 19:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.692 19:11:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.692 19:11:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.692 19:11:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.952 { 00:05:20.952 "nbd_device": "/dev/nbd0", 00:05:20.952 "bdev_name": "Malloc0" 00:05:20.952 }, 00:05:20.952 { 00:05:20.952 "nbd_device": "/dev/nbd1", 00:05:20.952 "bdev_name": "Malloc1" 00:05:20.952 } 00:05:20.952 ]' 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.952 { 00:05:20.952 "nbd_device": "/dev/nbd0", 00:05:20.952 "bdev_name": "Malloc0" 00:05:20.952 }, 00:05:20.952 { 00:05:20.952 "nbd_device": "/dev/nbd1", 00:05:20.952 "bdev_name": "Malloc1" 00:05:20.952 } 00:05:20.952 ]' 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.952 /dev/nbd1' 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.952 /dev/nbd1' 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.952 256+0 records in 00:05:20.952 256+0 records out 00:05:20.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103508 s, 101 MB/s 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.952 256+0 records in 00:05:20.952 256+0 records out 00:05:20.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143307 s, 73.2 MB/s 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.952 256+0 records in 00:05:20.952 256+0 records out 00:05:20.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149808 s, 70.0 MB/s 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.952 19:11:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.212 19:11:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.212 19:11:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.212 19:11:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.212 19:11:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.212 19:11:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.212 19:11:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.212 19:11:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.212 19:11:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.212 19:11:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.212 19:11:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.482 19:11:32 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.482 19:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.744 19:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.744 19:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.744 19:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.744 19:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.744 19:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.744 19:11:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.744 19:11:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.744 19:11:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.744 19:11:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.744 19:11:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.744 19:11:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:22.003 [2024-07-15 19:11:32.731779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.003 [2024-07-15 19:11:32.768497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.003 [2024-07-15 19:11:32.768497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.003 [2024-07-15 19:11:32.809459] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.003 [2024-07-15 19:11:32.809500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
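Each app_repeat round above runs the same RPC-driven NBD cycle: create two malloc bdevs, expose them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data to each with dd, read it back with cmp, then detach both devices. A minimal standalone sketch of that cycle, assuming an spdk_tgt instance is already listening on /var/tmp/spdk-nbd.sock and using the repository's rpc.py exactly as the log does (the /tmp/nbdrandtest path and the $RPC shorthand are stand-ins introduced here, not names from the test):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # two malloc bdevs, size 64 and block size 4096 as in the commands above (they come back as Malloc0, Malloc1)
    $RPC bdev_malloc_create 64 4096
    $RPC bdev_malloc_create 64 4096
    # expose them over NBD
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    # write 1 MiB of random data and verify it on both devices
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$dev"
    done
    rm /tmp/nbdrandtest
    # detach both devices again
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1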
00:05:25.294 19:11:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.294 19:11:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:25.294 spdk_app_start Round 1 00:05:25.294 19:11:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1432901 /var/tmp/spdk-nbd.sock 00:05:25.294 19:11:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1432901 ']' 00:05:25.294 19:11:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.294 19:11:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.294 19:11:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.294 19:11:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.294 19:11:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.294 19:11:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.294 19:11:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:25.294 19:11:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.294 Malloc0 00:05:25.294 19:11:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.294 Malloc1 00:05:25.294 19:11:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.294 19:11:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.554 /dev/nbd0 00:05:25.554 19:11:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.554 19:11:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.554 1+0 records in 00:05:25.554 1+0 records out 00:05:25.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219287 s, 18.7 MB/s 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:25.554 19:11:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:25.554 19:11:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.554 19:11:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.554 19:11:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.813 /dev/nbd1 00:05:25.813 19:11:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.813 19:11:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.813 1+0 records in 00:05:25.813 1+0 records out 00:05:25.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021049 s, 19.5 MB/s 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:25.813 19:11:36 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:25.813 19:11:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:25.813 19:11:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.813 19:11:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.813 19:11:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.813 19:11:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.813 19:11:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.813 19:11:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.813 { 00:05:25.813 "nbd_device": "/dev/nbd0", 00:05:25.813 "bdev_name": "Malloc0" 00:05:25.813 }, 00:05:25.813 { 00:05:25.813 "nbd_device": "/dev/nbd1", 00:05:25.813 "bdev_name": "Malloc1" 00:05:25.813 } 00:05:25.813 ]' 00:05:25.813 19:11:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.813 { 00:05:25.813 "nbd_device": "/dev/nbd0", 00:05:25.813 "bdev_name": "Malloc0" 00:05:25.813 }, 00:05:25.813 { 00:05:25.813 "nbd_device": "/dev/nbd1", 00:05:25.813 "bdev_name": "Malloc1" 00:05:25.813 } 00:05:25.813 ]' 00:05:25.813 19:11:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.073 /dev/nbd1' 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.073 /dev/nbd1' 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.073 256+0 records in 00:05:26.073 256+0 records out 00:05:26.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103264 s, 102 MB/s 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.073 256+0 records in 00:05:26.073 256+0 records out 00:05:26.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0138861 s, 75.5 MB/s 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.073 256+0 records in 00:05:26.073 256+0 records out 00:05:26.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145589 s, 72.0 MB/s 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.073 19:11:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.332 19:11:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.332 19:11:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.332 19:11:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.332 19:11:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.332 19:11:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.332 19:11:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.332 19:11:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.332 19:11:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.332 19:11:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.332 19:11:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.332 19:11:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.591 19:11:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.591 19:11:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.850 19:11:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.109 [2024-07-15 19:11:37.740030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.109 [2024-07-15 19:11:37.776979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.109 [2024-07-15 19:11:37.776982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.109 [2024-07-15 19:11:37.818455] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.109 [2024-07-15 19:11:37.818495] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:30.410 19:11:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.410 19:11:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:30.410 spdk_app_start Round 2 00:05:30.410 19:11:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1432901 /var/tmp/spdk-nbd.sock 00:05:30.410 19:11:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1432901 ']' 00:05:30.410 19:11:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.410 19:11:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.410 19:11:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.410 19:11:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.410 19:11:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.410 19:11:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.410 19:11:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:30.410 19:11:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.410 Malloc0 00:05:30.410 19:11:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.410 Malloc1 00:05:30.410 19:11:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.410 19:11:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.673 /dev/nbd0 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.673 1+0 records in 00:05:30.673 1+0 records out 00:05:30.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177749 s, 23.0 MB/s 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.673 /dev/nbd1 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.673 1+0 records in 00:05:30.673 1+0 records out 00:05:30.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181114 s, 22.6 MB/s 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.673 19:11:41 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.673 19:11:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.673 19:11:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.931 { 00:05:30.931 "nbd_device": "/dev/nbd0", 00:05:30.931 "bdev_name": "Malloc0" 00:05:30.931 }, 00:05:30.931 { 00:05:30.931 "nbd_device": "/dev/nbd1", 00:05:30.931 "bdev_name": "Malloc1" 00:05:30.931 } 00:05:30.931 ]' 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.931 { 00:05:30.931 "nbd_device": "/dev/nbd0", 00:05:30.931 "bdev_name": "Malloc0" 00:05:30.931 }, 00:05:30.931 { 00:05:30.931 "nbd_device": "/dev/nbd1", 00:05:30.931 "bdev_name": "Malloc1" 00:05:30.931 } 00:05:30.931 ]' 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.931 /dev/nbd1' 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.931 /dev/nbd1' 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.931 19:11:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.932 256+0 records in 00:05:30.932 256+0 records out 00:05:30.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101721 s, 103 MB/s 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.932 256+0 records in 00:05:30.932 256+0 records out 00:05:30.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.013694 s, 76.6 MB/s 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.932 256+0 records in 00:05:30.932 256+0 records out 00:05:30.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152322 s, 68.8 MB/s 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.932 19:11:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.190 19:11:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.448 19:11:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.707 19:11:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.707 19:11:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.965 19:11:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.965 [2024-07-15 19:11:42.794003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.223 [2024-07-15 19:11:42.831311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.223 [2024-07-15 19:11:42.831313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.223 [2024-07-15 19:11:42.871728] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.223 [2024-07-15 19:11:42.871769] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
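The nbd_get_count helper that brackets each round is just the nbd_get_disks RPC piped through jq and grep: it should report 2 while the devices are attached and 0 once both have been stopped. A short sketch of that check, reusing the $RPC shorthand from the earlier snippet:

    count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    # grep -c exits nonzero when it counts 0 matches, hence the trailing "true" seen in the log
    # expect 2 between nbd_start_disk and nbd_stop_disk, 0 after teardown
    echo "attached NBD devices: $count"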
00:05:35.509 19:11:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1432901 /var/tmp/spdk-nbd.sock 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1432901 ']' 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:35.509 19:11:45 event.app_repeat -- event/event.sh@39 -- # killprocess 1432901 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1432901 ']' 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1432901 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1432901 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1432901' 00:05:35.509 killing process with pid 1432901 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1432901 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1432901 00:05:35.509 spdk_app_start is called in Round 0. 00:05:35.509 Shutdown signal received, stop current app iteration 00:05:35.509 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 reinitialization... 00:05:35.509 spdk_app_start is called in Round 1. 00:05:35.509 Shutdown signal received, stop current app iteration 00:05:35.509 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 reinitialization... 00:05:35.509 spdk_app_start is called in Round 2. 00:05:35.509 Shutdown signal received, stop current app iteration 00:05:35.509 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 reinitialization... 00:05:35.509 spdk_app_start is called in Round 3. 
00:05:35.509 Shutdown signal received, stop current app iteration 00:05:35.509 19:11:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:35.509 19:11:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:35.509 00:05:35.509 real 0m15.505s 00:05:35.509 user 0m33.642s 00:05:35.509 sys 0m2.330s 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.509 19:11:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.509 ************************************ 00:05:35.509 END TEST app_repeat 00:05:35.509 ************************************ 00:05:35.509 19:11:46 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.509 19:11:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:35.509 19:11:46 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:35.509 19:11:46 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.509 19:11:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.509 19:11:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.509 ************************************ 00:05:35.509 START TEST cpu_locks 00:05:35.509 ************************************ 00:05:35.509 19:11:46 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:35.509 * Looking for test storage... 00:05:35.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:35.509 19:11:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:35.509 19:11:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:35.509 19:11:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:35.509 19:11:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:35.509 19:11:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.509 19:11:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.509 19:11:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.509 ************************************ 00:05:35.509 START TEST default_locks 00:05:35.509 ************************************ 00:05:35.509 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:35.509 19:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1435674 00:05:35.509 19:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1435674 00:05:35.509 19:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.509 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1435674 ']' 00:05:35.509 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.509 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.509 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:35.509 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.509 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.509 [2024-07-15 19:11:46.228002] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:35.509 [2024-07-15 19:11:46.228046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435674 ] 00:05:35.509 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.509 [2024-07-15 19:11:46.257860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:35.509 [2024-07-15 19:11:46.282845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.509 [2024-07-15 19:11:46.324518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.768 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.768 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:35.768 19:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1435674 00:05:35.768 19:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1435674 00:05:35.768 19:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.335 lslocks: write error 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1435674 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1435674 ']' 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1435674 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1435674 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1435674' 00:05:36.335 killing process with pid 1435674 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1435674 00:05:36.335 19:11:46 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1435674 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1435674 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1435674 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t 
waitforlisten 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1435674 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1435674 ']' 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1435674) - No such process 00:05:36.594 ERROR: process (pid: 1435674) is no longer running 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.594 19:11:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:36.595 19:11:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.595 19:11:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.595 19:11:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.595 00:05:36.595 real 0m1.111s 00:05:36.595 user 0m1.066s 00:05:36.595 sys 0m0.504s 00:05:36.595 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.595 19:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.595 ************************************ 00:05:36.595 END TEST default_locks 00:05:36.595 ************************************ 00:05:36.595 19:11:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:36.595 19:11:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.595 19:11:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.595 19:11:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.595 19:11:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.595 ************************************ 00:05:36.595 START TEST default_locks_via_rpc 00:05:36.595 ************************************ 00:05:36.595 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:36.595 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1435920 
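The default_locks test that just finished checks SPDK's per-core lock directly: with the target started on core mask 0x1, lslocks on the target pid must show an spdk_cpu_lock entry, and once the process is killed the same check (and a waitforlisten on the dead pid) must fail. Reduced to its core, and assuming the target pid has been captured into $pid the way the script captures spdk_tgt_pid:

    # while the target is running, the core lock must be visible
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    kill "$pid"
    # (the real killprocess helper waits for the pid to exit before re-checking)
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "core lock released"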
00:05:36.595 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1435920 00:05:36.595 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.595 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1435920 ']' 00:05:36.595 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.595 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.595 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.595 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.595 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.595 [2024-07-15 19:11:47.405158] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:36.595 [2024-07-15 19:11:47.405202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435920 ] 00:05:36.595 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.595 [2024-07-15 19:11:47.430357] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:36.853 [2024-07-15 19:11:47.459180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.853 [2024-07-15 19:11:47.495844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.853 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.853 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:36.853 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:36.853 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.853 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.854 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.854 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:36.854 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.854 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.854 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.854 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.854 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.854 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.113 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.113 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1435920 00:05:37.113 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1435920 00:05:37.113 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.371 19:11:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1435920 00:05:37.371 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1435920 ']' 00:05:37.371 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1435920 00:05:37.371 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:37.371 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.371 19:11:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1435920 00:05:37.371 19:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.371 19:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.371 19:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1435920' 00:05:37.372 killing process with pid 1435920 00:05:37.372 19:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1435920 00:05:37.372 19:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1435920 00:05:37.631 00:05:37.631 real 0m0.973s 00:05:37.631 user 0m0.926s 00:05:37.631 sys 0m0.432s 00:05:37.631 19:11:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.631 19:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.631 ************************************ 00:05:37.631 END TEST default_locks_via_rpc 00:05:37.631 ************************************ 00:05:37.631 19:11:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:37.631 19:11:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:37.631 19:11:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.631 19:11:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.631 19:11:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.631 ************************************ 00:05:37.631 START TEST non_locking_app_on_locked_coremask 00:05:37.631 ************************************ 00:05:37.631 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:37.631 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1436174 00:05:37.631 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1436174 /var/tmp/spdk.sock 00:05:37.631 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.631 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1436174 ']' 00:05:37.631 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.631 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.631 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.631 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.631 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.631 [2024-07-15 19:11:48.442084] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:37.631 [2024-07-15 19:11:48.442121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436174 ] 00:05:37.631 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.631 [2024-07-15 19:11:48.468425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:37.891 [2024-07-15 19:11:48.496080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.891 [2024-07-15 19:11:48.537170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1436183 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1436183 /var/tmp/spdk2.sock 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1436183 ']' 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.891 19:11:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.150 [2024-07-15 19:11:48.775100] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:38.150 [2024-07-15 19:11:48.775148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436183 ] 00:05:38.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.150 [2024-07-15 19:11:48.802760] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:38.150 [2024-07-15 19:11:48.845755] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:38.150 [2024-07-15 19:11:48.845773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.150 [2024-07-15 19:11:48.925193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.762 19:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.762 19:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:38.762 19:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1436174 00:05:38.762 19:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1436174 00:05:38.762 19:11:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.330 lslocks: write error 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1436174 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1436174 ']' 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1436174 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1436174 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1436174' 00:05:39.330 killing process with pid 1436174 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1436174 00:05:39.330 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1436174 00:05:40.266 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1436183 00:05:40.266 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1436183 ']' 00:05:40.266 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1436183 00:05:40.266 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:40.266 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.267 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1436183 00:05:40.267 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.267 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.267 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1436183' 00:05:40.267 
killing process with pid 1436183 00:05:40.267 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1436183 00:05:40.267 19:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1436183 00:05:40.267 00:05:40.267 real 0m2.708s 00:05:40.267 user 0m2.828s 00:05:40.267 sys 0m0.885s 00:05:40.267 19:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.267 19:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.267 ************************************ 00:05:40.267 END TEST non_locking_app_on_locked_coremask 00:05:40.267 ************************************ 00:05:40.526 19:11:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:40.526 19:11:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:40.526 19:11:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.526 19:11:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.526 19:11:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.526 ************************************ 00:05:40.526 START TEST locking_app_on_unlocked_coremask 00:05:40.526 ************************************ 00:05:40.526 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:40.526 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1436673 00:05:40.526 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1436673 /var/tmp/spdk.sock 00:05:40.526 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:40.526 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1436673 ']' 00:05:40.526 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.526 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.526 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.526 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.526 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.526 [2024-07-15 19:11:51.212804] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:05:40.526 [2024-07-15 19:11:51.212843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436673 ] 00:05:40.526 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.526 [2024-07-15 19:11:51.238577] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:40.526 [2024-07-15 19:11:51.264941] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:40.526 [2024-07-15 19:11:51.264960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.526 [2024-07-15 19:11:51.305771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1436678 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1436678 /var/tmp/spdk2.sock 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1436678 ']' 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.785 19:11:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.785 [2024-07-15 19:11:51.537372] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:40.785 [2024-07-15 19:11:51.537421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436678 ] 00:05:40.785 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.785 [2024-07-15 19:11:51.566776] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:40.785 [2024-07-15 19:11:51.614669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.044 [2024-07-15 19:11:51.696553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.611 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.611 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:41.611 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1436678 00:05:41.611 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1436678 00:05:41.611 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.178 lslocks: write error 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1436673 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1436673 ']' 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1436673 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1436673 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1436673' 00:05:42.178 killing process with pid 1436673 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1436673 00:05:42.178 19:11:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1436673 00:05:42.745 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1436678 00:05:42.745 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1436678 ']' 00:05:42.745 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1436678 00:05:42.745 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:42.745 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.745 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1436678 00:05:42.745 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.745 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.745 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1436678' 00:05:42.745 killing process with pid 1436678 00:05:42.745 
19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1436678 00:05:42.745 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1436678 00:05:43.313 00:05:43.313 real 0m2.731s 00:05:43.313 user 0m2.842s 00:05:43.313 sys 0m0.885s 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.313 ************************************ 00:05:43.313 END TEST locking_app_on_unlocked_coremask 00:05:43.313 ************************************ 00:05:43.313 19:11:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.313 19:11:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:43.313 19:11:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.313 19:11:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.313 19:11:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.313 ************************************ 00:05:43.313 START TEST locking_app_on_locked_coremask 00:05:43.313 ************************************ 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1437173 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1437173 /var/tmp/spdk.sock 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1437173 ']' 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.313 19:11:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.313 [2024-07-15 19:11:54.015911] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:43.313 [2024-07-15 19:11:54.015954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437173 ] 00:05:43.313 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.313 [2024-07-15 19:11:54.041927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:43.313 [2024-07-15 19:11:54.070581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.313 [2024-07-15 19:11:54.107919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1437180 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1437180 /var/tmp/spdk2.sock 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1437180 /var/tmp/spdk2.sock 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1437180 /var/tmp/spdk2.sock 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1437180 ']' 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.572 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.572 [2024-07-15 19:11:54.334263] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:43.572 [2024-07-15 19:11:54.334310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437180 ] 00:05:43.572 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.572 [2024-07-15 19:11:54.362836] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:43.572 [2024-07-15 19:11:54.411056] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1437173 has claimed it. 00:05:43.572 [2024-07-15 19:11:54.411089] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:44.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1437180) - No such process 00:05:44.149 ERROR: process (pid: 1437180) is no longer running 00:05:44.149 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.149 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:44.149 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:44.149 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.149 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.149 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.149 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1437173 00:05:44.149 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1437173 00:05:44.149 19:11:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.715 lslocks: write error 00:05:44.715 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1437173 00:05:44.715 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1437173 ']' 00:05:44.715 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1437173 00:05:44.716 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:44.716 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.716 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1437173 00:05:44.716 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.716 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.716 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1437173' 00:05:44.716 killing process with pid 1437173 00:05:44.716 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1437173 00:05:44.716 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1437173 00:05:45.281 00:05:45.281 real 0m1.881s 00:05:45.281 user 0m1.999s 00:05:45.281 sys 0m0.617s 00:05:45.281 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.281 19:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.281 ************************************ 00:05:45.281 END TEST locking_app_on_locked_coremask 00:05:45.281 ************************************ 00:05:45.281 19:11:55 
event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:45.281 19:11:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:45.281 19:11:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.281 19:11:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.281 19:11:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.281 ************************************ 00:05:45.281 START TEST locking_overlapped_coremask 00:05:45.281 ************************************ 00:05:45.281 19:11:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:45.281 19:11:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1437536 00:05:45.281 19:11:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1437536 /var/tmp/spdk.sock 00:05:45.281 19:11:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:45.281 19:11:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1437536 ']' 00:05:45.281 19:11:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.281 19:11:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.281 19:11:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.281 19:11:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.281 19:11:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.281 [2024-07-15 19:11:55.964244] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:45.281 [2024-07-15 19:11:55.964288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437536 ] 00:05:45.281 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.281 [2024-07-15 19:11:55.989287] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:45.281 [2024-07-15 19:11:56.018204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.281 [2024-07-15 19:11:56.059430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.281 [2024-07-15 19:11:56.059519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.281 [2024-07-15 19:11:56.059529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1437664 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1437664 /var/tmp/spdk2.sock 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1437664 /var/tmp/spdk2.sock 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1437664 /var/tmp/spdk2.sock 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1437664 ']' 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.539 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.539 [2024-07-15 19:11:56.305402] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:05:45.539 [2024-07-15 19:11:56.305447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437664 ] 00:05:45.539 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.539 [2024-07-15 19:11:56.332111] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:45.539 [2024-07-15 19:11:56.380182] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1437536 has claimed it. 00:05:45.539 [2024-07-15 19:11:56.380211] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:46.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1437664) - No such process 00:05:46.104 ERROR: process (pid: 1437664) is no longer running 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1437536 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1437536 ']' 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1437536 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.104 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1437536 00:05:46.362 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.362 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.362 19:11:56 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1437536' 00:05:46.362 killing process with pid 1437536 00:05:46.362 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 1437536 00:05:46.362 19:11:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1437536 00:05:46.620 00:05:46.621 real 0m1.359s 00:05:46.621 user 0m3.662s 00:05:46.621 sys 0m0.385s 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.621 ************************************ 00:05:46.621 END TEST locking_overlapped_coremask 00:05:46.621 ************************************ 00:05:46.621 19:11:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:46.621 19:11:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:46.621 19:11:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.621 19:11:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.621 19:11:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.621 ************************************ 00:05:46.621 START TEST locking_overlapped_coremask_via_rpc 00:05:46.621 ************************************ 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1437827 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1437827 /var/tmp/spdk.sock 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1437827 ']' 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.621 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.621 [2024-07-15 19:11:57.384689] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:05:46.621 [2024-07-15 19:11:57.384728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437827 ] 00:05:46.621 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.621 [2024-07-15 19:11:57.411505] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:46.621 [2024-07-15 19:11:57.439304] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.621 [2024-07-15 19:11:57.439323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.879 [2024-07-15 19:11:57.482431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.879 [2024-07-15 19:11:57.482530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.879 [2024-07-15 19:11:57.482532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1437925 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1437925 /var/tmp/spdk2.sock 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1437925 ']' 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.879 19:11:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.879 [2024-07-15 19:11:57.718164] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:46.879 [2024-07-15 19:11:57.718211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437925 ] 00:05:47.136 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.136 [2024-07-15 19:11:57.747112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:47.136 [2024-07-15 19:11:57.795542] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.136 [2024-07-15 19:11:57.795568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.136 [2024-07-15 19:11:57.881100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.136 [2024-07-15 19:11:57.881215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.136 [2024-07-15 19:11:57.881216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.703 [2024-07-15 19:11:58.525295] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1437827 has claimed it. 
00:05:47.703 request: 00:05:47.703 { 00:05:47.703 "method": "framework_enable_cpumask_locks", 00:05:47.703 "req_id": 1 00:05:47.703 } 00:05:47.703 Got JSON-RPC error response 00:05:47.703 response: 00:05:47.703 { 00:05:47.703 "code": -32603, 00:05:47.703 "message": "Failed to claim CPU core: 2" 00:05:47.703 } 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1437827 /var/tmp/spdk.sock 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1437827 ']' 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.703 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.962 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.963 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:47.963 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1437925 /var/tmp/spdk2.sock 00:05:47.963 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1437925 ']' 00:05:47.963 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.963 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.963 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:47.963 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.963 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.222 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.222 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:48.222 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:48.222 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.222 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.222 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.222 00:05:48.222 real 0m1.593s 00:05:48.222 user 0m0.745s 00:05:48.222 sys 0m0.137s 00:05:48.222 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.222 19:11:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.222 ************************************ 00:05:48.222 END TEST locking_overlapped_coremask_via_rpc 00:05:48.222 ************************************ 00:05:48.222 19:11:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:48.222 19:11:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:48.222 19:11:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1437827 ]] 00:05:48.222 19:11:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1437827 00:05:48.222 19:11:58 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1437827 ']' 00:05:48.222 19:11:58 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1437827 00:05:48.222 19:11:58 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:48.222 19:11:58 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.222 19:11:58 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1437827 00:05:48.222 19:11:59 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.222 19:11:59 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.222 19:11:59 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1437827' 00:05:48.222 killing process with pid 1437827 00:05:48.222 19:11:59 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1437827 00:05:48.222 19:11:59 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1437827 00:05:48.481 19:11:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1437925 ]] 00:05:48.481 19:11:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1437925 00:05:48.481 19:11:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1437925 ']' 00:05:48.481 19:11:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1437925 00:05:48.481 19:11:59 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:48.481 19:11:59 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.481 19:11:59 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1437925 00:05:48.739 19:11:59 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:48.739 19:11:59 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:48.739 19:11:59 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1437925' 00:05:48.739 killing process with pid 1437925 00:05:48.739 19:11:59 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1437925 00:05:48.739 19:11:59 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1437925 00:05:48.998 19:11:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.998 19:11:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:48.998 19:11:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1437827 ]] 00:05:48.998 19:11:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1437827 00:05:48.998 19:11:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1437827 ']' 00:05:48.998 19:11:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1437827 00:05:48.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1437827) - No such process 00:05:48.998 19:11:59 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1437827 is not found' 00:05:48.998 Process with pid 1437827 is not found 00:05:48.998 19:11:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1437925 ]] 00:05:48.998 19:11:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1437925 00:05:48.998 19:11:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1437925 ']' 00:05:48.998 19:11:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1437925 00:05:48.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1437925) - No such process 00:05:48.998 19:11:59 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1437925 is not found' 00:05:48.998 Process with pid 1437925 is not found 00:05:48.998 19:11:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.998 00:05:48.998 real 0m13.616s 00:05:48.998 user 0m23.254s 00:05:48.998 sys 0m4.705s 00:05:48.998 19:11:59 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.998 19:11:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.998 ************************************ 00:05:48.998 END TEST cpu_locks 00:05:48.998 ************************************ 00:05:48.999 19:11:59 event -- common/autotest_common.sh@1142 -- # return 0 00:05:48.999 00:05:48.999 real 0m37.234s 00:05:48.999 user 1m9.973s 00:05:48.999 sys 0m7.927s 00:05:48.999 19:11:59 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.999 19:11:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.999 ************************************ 00:05:48.999 END TEST event 00:05:48.999 ************************************ 00:05:48.999 19:11:59 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.999 19:11:59 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:48.999 19:11:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.999 19:11:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.999 19:11:59 
-- common/autotest_common.sh@10 -- # set +x 00:05:48.999 ************************************ 00:05:48.999 START TEST thread 00:05:48.999 ************************************ 00:05:48.999 19:11:59 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:49.258 * Looking for test storage... 00:05:49.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:49.258 19:11:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.258 19:11:59 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:49.258 19:11:59 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.258 19:11:59 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.258 ************************************ 00:05:49.258 START TEST thread_poller_perf 00:05:49.258 ************************************ 00:05:49.258 19:11:59 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.258 [2024-07-15 19:11:59.923490] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:49.258 [2024-07-15 19:11:59.923550] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438283 ] 00:05:49.258 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.258 [2024-07-15 19:11:59.952456] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:49.258 [2024-07-15 19:11:59.981291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.258 [2024-07-15 19:12:00.022710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.258 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:50.636 ====================================== 00:05:50.636 busy:2306193700 (cyc) 00:05:50.636 total_run_count: 390000 00:05:50.636 tsc_hz: 2300000000 (cyc) 00:05:50.636 ====================================== 00:05:50.636 poller_cost: 5913 (cyc), 2570 (nsec) 00:05:50.636 00:05:50.636 real 0m1.188s 00:05:50.636 user 0m1.106s 00:05:50.636 sys 0m0.077s 00:05:50.636 19:12:01 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.636 19:12:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.636 ************************************ 00:05:50.636 END TEST thread_poller_perf 00:05:50.636 ************************************ 00:05:50.636 19:12:01 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:50.637 19:12:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.637 19:12:01 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:50.637 19:12:01 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.637 19:12:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.637 ************************************ 00:05:50.637 START TEST thread_poller_perf 00:05:50.637 ************************************ 00:05:50.637 19:12:01 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.637 [2024-07-15 19:12:01.177037] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:50.637 [2024-07-15 19:12:01.177104] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438589 ] 00:05:50.637 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.637 [2024-07-15 19:12:01.206579] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.637 [2024-07-15 19:12:01.234922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.637 [2024-07-15 19:12:01.273938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.637 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:51.572 ====================================== 00:05:51.572 busy:2301648024 (cyc) 00:05:51.572 total_run_count: 5253000 00:05:51.572 tsc_hz: 2300000000 (cyc) 00:05:51.572 ====================================== 00:05:51.572 poller_cost: 438 (cyc), 190 (nsec) 00:05:51.572 00:05:51.572 real 0m1.180s 00:05:51.572 user 0m1.104s 00:05:51.572 sys 0m0.072s 00:05:51.572 19:12:02 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.572 19:12:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.572 ************************************ 00:05:51.572 END TEST thread_poller_perf 00:05:51.572 ************************************ 00:05:51.572 19:12:02 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:51.572 19:12:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:51.572 00:05:51.572 real 0m2.588s 00:05:51.572 user 0m2.297s 00:05:51.572 sys 0m0.297s 00:05:51.572 19:12:02 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.572 19:12:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.572 ************************************ 00:05:51.572 END TEST thread 00:05:51.572 ************************************ 00:05:51.572 19:12:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.572 19:12:02 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:51.572 19:12:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.572 19:12:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.572 19:12:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.831 ************************************ 00:05:51.831 START TEST accel 00:05:51.831 ************************************ 00:05:51.831 19:12:02 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:51.831 * Looking for test storage... 00:05:51.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:51.831 19:12:02 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:51.831 19:12:02 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:51.831 19:12:02 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:51.831 19:12:02 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1438933 00:05:51.831 19:12:02 accel -- accel/accel.sh@63 -- # waitforlisten 1438933 00:05:51.831 19:12:02 accel -- common/autotest_common.sh@829 -- # '[' -z 1438933 ']' 00:05:51.831 19:12:02 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.831 19:12:02 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.831 19:12:02 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:51.831 19:12:02 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.831 19:12:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.831 19:12:02 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:51.831 19:12:02 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:51.831 19:12:02 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.831 19:12:02 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.831 19:12:02 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.831 19:12:02 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.831 19:12:02 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.831 19:12:02 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:51.831 19:12:02 accel -- accel/accel.sh@41 -- # jq -r . 00:05:51.831 [2024-07-15 19:12:02.581688] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:51.831 [2024-07-15 19:12:02.581735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438933 ] 00:05:51.831 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.831 [2024-07-15 19:12:02.608496] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:51.831 [2024-07-15 19:12:02.636722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.831 [2024-07-15 19:12:02.677961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.089 19:12:02 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.089 19:12:02 accel -- common/autotest_common.sh@862 -- # return 0 00:05:52.089 19:12:02 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:52.089 19:12:02 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:52.090 19:12:02 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:52.090 19:12:02 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:52.090 19:12:02 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:52.090 19:12:02 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:52.090 19:12:02 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:52.090 19:12:02 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.090 19:12:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.090 19:12:02 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 
19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:52.090 19:12:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:52.090 19:12:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:52.090 19:12:02 accel -- accel/accel.sh@75 -- # killprocess 1438933 00:05:52.090 19:12:02 accel -- common/autotest_common.sh@948 -- # '[' -z 1438933 ']' 00:05:52.090 19:12:02 accel -- common/autotest_common.sh@952 -- # kill -0 1438933 00:05:52.090 19:12:02 accel -- common/autotest_common.sh@953 -- # uname 00:05:52.090 19:12:02 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.090 19:12:02 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1438933 00:05:52.347 19:12:02 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.347 19:12:02 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.347 19:12:02 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1438933' 00:05:52.347 killing process with pid 1438933 00:05:52.347 19:12:02 accel -- common/autotest_common.sh@967 -- # kill 1438933 00:05:52.347 19:12:02 accel -- common/autotest_common.sh@972 -- # wait 1438933 00:05:52.622 19:12:03 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:52.622 19:12:03 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:52.622 19:12:03 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:52.622 19:12:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.622 19:12:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.622 19:12:03 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:52.622 19:12:03 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:52.622 19:12:03 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:52.622 19:12:03 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.622 19:12:03 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.622 19:12:03 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.622 19:12:03 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.622 19:12:03 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.622 19:12:03 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:52.622 19:12:03 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:52.622 19:12:03 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.622 19:12:03 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:52.622 19:12:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.622 19:12:03 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:52.622 19:12:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:52.622 19:12:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.622 19:12:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.622 ************************************ 00:05:52.622 START TEST accel_missing_filename 00:05:52.622 ************************************ 00:05:52.622 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:52.622 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:52.622 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:52.622 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:52.622 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.622 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:52.622 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.622 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:52.622 19:12:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:52.622 19:12:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:52.622 19:12:03 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.622 19:12:03 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.622 19:12:03 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.622 19:12:03 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.622 19:12:03 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.622 19:12:03 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:52.622 19:12:03 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:52.622 [2024-07-15 19:12:03.385065] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:52.622 [2024-07-15 19:12:03.385130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439192 ] 00:05:52.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.622 [2024-07-15 19:12:03.412959] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:52.622 [2024-07-15 19:12:03.440502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.917 [2024-07-15 19:12:03.480878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.917 [2024-07-15 19:12:03.522319] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.917 [2024-07-15 19:12:03.582354] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:52.917 A filename is required. 00:05:52.917 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:52.917 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.917 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:52.917 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:52.917 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:52.917 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.917 00:05:52.917 real 0m0.287s 00:05:52.917 user 0m0.212s 00:05:52.917 sys 0m0.113s 00:05:52.917 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.917 19:12:03 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:52.917 ************************************ 00:05:52.917 END TEST accel_missing_filename 00:05:52.917 ************************************ 00:05:52.917 19:12:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.917 19:12:03 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.917 19:12:03 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:52.917 19:12:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.917 19:12:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.917 ************************************ 00:05:52.918 START TEST accel_compress_verify 00:05:52.918 ************************************ 00:05:52.918 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.918 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:52.918 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.918 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:52.918 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.918 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:52.918 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.918 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.918 19:12:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:52.918 19:12:03 
accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:52.918 19:12:03 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.918 19:12:03 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.918 19:12:03 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.918 19:12:03 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.918 19:12:03 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.918 19:12:03 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:52.918 19:12:03 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:52.918 [2024-07-15 19:12:03.730574] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:52.918 [2024-07-15 19:12:03.730620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439217 ] 00:05:52.918 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.918 [2024-07-15 19:12:03.758134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:53.176 [2024-07-15 19:12:03.785273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.176 [2024-07-15 19:12:03.825305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.176 [2024-07-15 19:12:03.866764] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.176 [2024-07-15 19:12:03.926659] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:53.176 00:05:53.176 Compression does not support the verify option, aborting. 
00:05:53.176 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:53.176 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.176 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:53.176 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:53.176 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:53.176 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.176 00:05:53.176 real 0m0.286s 00:05:53.176 user 0m0.209s 00:05:53.176 sys 0m0.113s 00:05:53.176 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.176 19:12:03 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:53.176 ************************************ 00:05:53.176 END TEST accel_compress_verify 00:05:53.176 ************************************ 00:05:53.176 19:12:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.176 19:12:04 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:53.176 19:12:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:53.176 19:12:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.176 19:12:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.435 ************************************ 00:05:53.435 START TEST accel_wrong_workload 00:05:53.435 ************************************ 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:53.435 19:12:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:53.435 19:12:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:53.435 19:12:04 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.435 19:12:04 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.435 19:12:04 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.435 19:12:04 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.435 19:12:04 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.435 19:12:04 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:53.435 19:12:04 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:05:53.435 Unsupported workload type: foobar 00:05:53.435 [2024-07-15 19:12:04.076050] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:53.435 accel_perf options: 00:05:53.435 [-h help message] 00:05:53.435 [-q queue depth per core] 00:05:53.435 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:53.435 [-T number of threads per core 00:05:53.435 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:53.435 [-t time in seconds] 00:05:53.435 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:53.435 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:53.435 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:53.435 [-l for compress/decompress workloads, name of uncompressed input file 00:05:53.435 [-S for crc32c workload, use this seed value (default 0) 00:05:53.435 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:53.435 [-f for fill workload, use this BYTE value (default 255) 00:05:53.435 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:53.435 [-y verify result if this switch is on] 00:05:53.435 [-a tasks to allocate per core (default: same value as -q)] 00:05:53.435 Can be used to spread operations across a wider range of memory. 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.435 00:05:53.435 real 0m0.030s 00:05:53.435 user 0m0.020s 00:05:53.435 sys 0m0.010s 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.435 19:12:04 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:53.435 ************************************ 00:05:53.435 END TEST accel_wrong_workload 00:05:53.435 ************************************ 00:05:53.435 Error: writing output failed: Broken pipe 00:05:53.435 19:12:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.435 19:12:04 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:53.435 19:12:04 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:53.435 19:12:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.435 19:12:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.435 ************************************ 00:05:53.435 START TEST accel_negative_buffers 00:05:53.435 ************************************ 00:05:53.435 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:53.435 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:53.435 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:53.435 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:53.435 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:05:53.435 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:53.435 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.435 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:53.435 19:12:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:53.435 19:12:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:53.435 19:12:04 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.435 19:12:04 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.435 19:12:04 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.436 19:12:04 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.436 19:12:04 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.436 19:12:04 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:53.436 19:12:04 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:53.436 -x option must be non-negative. 00:05:53.436 [2024-07-15 19:12:04.163246] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:53.436 accel_perf options: 00:05:53.436 [-h help message] 00:05:53.436 [-q queue depth per core] 00:05:53.436 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:53.436 [-T number of threads per core 00:05:53.436 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:53.436 [-t time in seconds] 00:05:53.436 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:53.436 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:53.436 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:53.436 [-l for compress/decompress workloads, name of uncompressed input file 00:05:53.436 [-S for crc32c workload, use this seed value (default 0) 00:05:53.436 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:53.436 [-f for fill workload, use this BYTE value (default 255) 00:05:53.436 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:53.436 [-y verify result if this switch is on] 00:05:53.436 [-a tasks to allocate per core (default: same value as -q)] 00:05:53.436 Can be used to spread operations across a wider range of memory. 
00:05:53.436 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:53.436 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.436 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.436 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.436 00:05:53.436 real 0m0.030s 00:05:53.436 user 0m0.019s 00:05:53.436 sys 0m0.011s 00:05:53.436 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.436 19:12:04 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:53.436 ************************************ 00:05:53.436 END TEST accel_negative_buffers 00:05:53.436 ************************************ 00:05:53.436 Error: writing output failed: Broken pipe 00:05:53.436 19:12:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.436 19:12:04 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:53.436 19:12:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:53.436 19:12:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.436 19:12:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.436 ************************************ 00:05:53.436 START TEST accel_crc32c 00:05:53.436 ************************************ 00:05:53.436 19:12:04 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:53.436 19:12:04 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:53.436 [2024-07-15 19:12:04.256384] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:53.436 [2024-07-15 19:12:04.256459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439437 ] 00:05:53.436 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.436 [2024-07-15 19:12:04.284194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:53.694 [2024-07-15 19:12:04.311500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.694 [2024-07-15 19:12:04.351647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:53.694 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.695 19:12:04 accel.accel_crc32c 
-- accel/accel.sh@20 -- # val=32 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.695 19:12:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.069 
19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.069 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.070 19:12:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:55.070 19:12:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.070 00:05:55.070 real 0m1.294s 00:05:55.070 user 0m1.190s 00:05:55.070 sys 0m0.119s 00:05:55.070 19:12:05 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.070 19:12:05 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:55.070 ************************************ 00:05:55.070 END TEST accel_crc32c 00:05:55.070 ************************************ 00:05:55.070 19:12:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.070 19:12:05 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:55.070 19:12:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:55.070 19:12:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.070 19:12:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.070 ************************************ 00:05:55.070 START TEST accel_crc32c_C2 00:05:55.070 ************************************ 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:55.070 [2024-07-15 19:12:05.615369] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:05:55.070 [2024-07-15 19:12:05.615424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439671 ] 00:05:55.070 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.070 [2024-07-15 19:12:05.644482] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:55.070 [2024-07-15 19:12:05.672912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.070 [2024-07-15 19:12:05.712563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.070 19:12:05 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.070 19:12:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.446 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.447 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.447 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:56.447 19:12:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.447 00:05:56.447 real 0m1.297s 00:05:56.447 user 0m1.196s 00:05:56.447 sys 0m0.115s 00:05:56.447 19:12:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.447 19:12:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:56.447 ************************************ 00:05:56.447 END TEST accel_crc32c_C2 00:05:56.447 ************************************ 00:05:56.447 19:12:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.447 19:12:06 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:56.447 19:12:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:56.447 19:12:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.447 19:12:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.447 ************************************ 00:05:56.447 START TEST accel_copy 00:05:56.447 ************************************ 00:05:56.447 19:12:06 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:56.447 19:12:06 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:56.447 [2024-07-15 19:12:06.978206] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:56.447 [2024-07-15 19:12:06.978263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439916 ] 00:05:56.447 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.447 [2024-07-15 19:12:07.006028] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:56.447 [2024-07-15 19:12:07.033151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.447 [2024-07-15 19:12:07.073043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.447 19:12:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.386 19:12:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.386 19:12:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.386 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:57.646 19:12:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.646 00:05:57.646 real 0m1.293s 00:05:57.646 user 0m1.196s 00:05:57.646 sys 0m0.111s 00:05:57.646 19:12:08 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.646 19:12:08 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:57.646 ************************************ 00:05:57.646 END TEST accel_copy 00:05:57.646 ************************************ 00:05:57.646 19:12:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.646 19:12:08 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:57.646 19:12:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:57.646 19:12:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.646 19:12:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.646 ************************************ 00:05:57.646 START TEST accel_fill 00:05:57.646 ************************************ 00:05:57.646 19:12:08 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
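(Context for the fill trace that follows: every accel_* test in this section drives the same accel_perf example binary, varying only the -w workload and its size/queue flags. The lines below are a hedged, by-hand sketch of the fill invocation, reconstructed from the binary path and flags already visible in the traced command line; running it without the -c /dev/fd/62 JSON config that build_accel_config pipes in is an assumption, relying on accel_perf falling back to the software module.)

# Hedged sketch only -- path and flags copied from the traced accel_perf command above;
# omitting the harness's -c /dev/fd/62 JSON config is an assumption, not part of the log.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w fill -f 128 -q 64 -a 64 -y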
00:05:57.646 [2024-07-15 19:12:08.325273] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:57.646 [2024-07-15 19:12:08.325310] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440309 ] 00:05:57.646 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.646 [2024-07-15 19:12:08.350832] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:57.646 [2024-07-15 19:12:08.378287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.646 [2024-07-15 19:12:08.418509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.646 19:12:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:59.026 19:12:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.026 00:05:59.026 real 0m1.282s 00:05:59.026 user 0m1.187s 00:05:59.026 sys 0m0.108s 00:05:59.026 19:12:09 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.026 19:12:09 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:59.026 ************************************ 00:05:59.026 END TEST accel_fill 00:05:59.026 ************************************ 00:05:59.026 19:12:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.026 19:12:09 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:59.026 19:12:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:59.026 19:12:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.026 19:12:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.026 ************************************ 00:05:59.026 START TEST accel_copy_crc32c 00:05:59.026 ************************************ 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:59.026 
19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:59.026 [2024-07-15 19:12:09.670505] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:05:59.026 [2024-07-15 19:12:09.670560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440746 ] 00:05:59.026 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.026 [2024-07-15 19:12:09.699534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:59.026 [2024-07-15 19:12:09.727301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.026 [2024-07-15 19:12:09.766732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.026 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:59.027 19:12:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:59.027 19:12:09 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.404 00:06:00.404 real 0m1.295s 00:06:00.404 user 0m1.186s 00:06:00.404 sys 0m0.121s 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.404 19:12:10 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:00.404 ************************************ 00:06:00.404 END TEST accel_copy_crc32c 00:06:00.404 ************************************ 00:06:00.404 19:12:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.404 19:12:10 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:00.404 19:12:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:00.404 19:12:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.404 19:12:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.404 ************************************ 00:06:00.404 START TEST accel_copy_crc32c_C2 00:06:00.404 ************************************ 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local 
accel_opc 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:00.404 [2024-07-15 19:12:11.032076] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:00.404 [2024-07-15 19:12:11.032123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441014 ] 00:06:00.404 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.404 [2024-07-15 19:12:11.059701] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:00.404 [2024-07-15 19:12:11.086984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.404 [2024-07-15 19:12:11.126108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.404 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.405 19:12:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.783 00:06:01.783 real 0m1.293s 00:06:01.783 user 0m1.195s 00:06:01.783 sys 0m0.113s 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.783 19:12:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:01.783 ************************************ 00:06:01.783 END TEST accel_copy_crc32c_C2 00:06:01.783 ************************************ 00:06:01.783 19:12:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.783 19:12:12 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:01.783 19:12:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:01.783 19:12:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.783 19:12:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.783 ************************************ 00:06:01.783 START TEST accel_dualcast 00:06:01.783 ************************************ 00:06:01.783 19:12:12 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:01.783 [2024-07-15 19:12:12.391779] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:01.783 [2024-07-15 19:12:12.391843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441250 ] 00:06:01.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.783 [2024-07-15 19:12:12.420564] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:01.783 [2024-07-15 19:12:12.449928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.783 [2024-07-15 19:12:12.489741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:01.783 19:12:12 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.783 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.784 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.784 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.784 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.784 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.784 19:12:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.784 19:12:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.784 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.784 19:12:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.161 19:12:13 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:03.161 19:12:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.161 00:06:03.161 real 0m1.297s 00:06:03.161 user 0m1.193s 00:06:03.161 sys 0m0.118s 00:06:03.161 19:12:13 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.161 19:12:13 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:03.161 ************************************ 00:06:03.161 END TEST accel_dualcast 00:06:03.161 ************************************ 00:06:03.161 19:12:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.161 19:12:13 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:03.161 19:12:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:03.161 19:12:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.161 19:12:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.161 ************************************ 00:06:03.161 START TEST accel_compare 00:06:03.161 ************************************ 00:06:03.161 19:12:13 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w compare -y 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:03.161 [2024-07-15 19:12:13.758302] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:03.161 [2024-07-15 19:12:13.758379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441483 ] 00:06:03.161 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.161 [2024-07-15 19:12:13.787299] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.161 [2024-07-15 19:12:13.816186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.161 [2024-07-15 19:12:13.858561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.161 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@23 -- # 
accel_opc=compare 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.162 19:12:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.536 19:12:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.536 19:12:15 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:04.537 19:12:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.537 00:06:04.537 real 0m1.301s 00:06:04.537 user 0m1.200s 00:06:04.537 sys 0m0.114s 00:06:04.537 19:12:15 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.537 19:12:15 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:04.537 ************************************ 00:06:04.537 END TEST accel_compare 00:06:04.537 ************************************ 00:06:04.537 19:12:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.537 19:12:15 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:04.537 19:12:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:04.537 19:12:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.537 19:12:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.537 ************************************ 00:06:04.537 START TEST accel_xor 00:06:04.537 ************************************ 00:06:04.537 19:12:15 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 
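The xor case traced next XORs what appear to be two 4096-byte source buffers (the val=2 and '4096 bytes' settings in the trace) on the software module. A minimal sketch of the same run, again without the fd-based JSON config:

  # 1-second xor workload over the default source-buffer count
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y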
accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:04.537 [2024-07-15 19:12:15.121245] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:04.537 [2024-07-15 19:12:15.121300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441737 ] 00:06:04.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.537 [2024-07-15 19:12:15.148765] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.537 [2024-07-15 19:12:15.176522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.537 [2024-07-15 19:12:15.215564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 
accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.537 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.538 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.538 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.538 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.538 19:12:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.538 19:12:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.538 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.538 19:12:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.915 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.915 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.915 19:12:16 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.915 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.915 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.915 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.915 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.916 00:06:05.916 real 0m1.288s 00:06:05.916 user 0m1.191s 00:06:05.916 sys 0m0.111s 00:06:05.916 19:12:16 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.916 19:12:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:05.916 ************************************ 00:06:05.916 END TEST accel_xor 00:06:05.916 ************************************ 00:06:05.916 19:12:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.916 19:12:16 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:05.916 19:12:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:05.916 19:12:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.916 19:12:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.916 ************************************ 00:06:05.916 START TEST accel_xor 00:06:05.916 ************************************ 00:06:05.916 19:12:16 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:05.916 19:12:16 accel.accel_xor 
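The second accel_xor run repeats the same workload but adds -x 3, widening the operation to three source buffers instead of two (val=3 in the trace that follows). As a sketch under the same caveat about the piped-in config:

  # same xor workload, widened to three 4096-byte sources
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3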
-- accel/accel.sh@12 -- # build_accel_config 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:05.916 [2024-07-15 19:12:16.476005] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:05.916 [2024-07-15 19:12:16.476062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441969 ] 00:06:05.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.916 [2024-07-15 19:12:16.503308] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.916 [2024-07-15 19:12:16.531102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.916 [2024-07-15 19:12:16.570234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.916 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.917 19:12:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.917 19:12:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.917 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.917 19:12:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@20 -- # 
val= 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.296 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.297 19:12:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.297 19:12:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:07.297 19:12:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.297 00:06:07.297 real 0m1.288s 00:06:07.297 user 0m1.187s 00:06:07.297 sys 0m0.116s 00:06:07.297 19:12:17 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.297 19:12:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:07.297 ************************************ 00:06:07.297 END TEST accel_xor 00:06:07.297 ************************************ 00:06:07.297 19:12:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.297 19:12:17 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:07.297 19:12:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:07.297 19:12:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.297 19:12:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.297 ************************************ 00:06:07.297 START TEST accel_dif_verify 00:06:07.297 ************************************ 00:06:07.297 19:12:17 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@31 -- # 
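accel_dif_verify exercises the DIF verify opcode; judging from the values in the trace below, it works on 4096-byte buffers with a 512-byte block size and 8 bytes of DIF metadata per block. A rough equivalent invocation, with the usual caveat that the harness also feeds in a config over /dev/fd/62:

  # 1-second DIF verify pass (4096-byte buffers, 512-byte blocks, 8-byte DIF)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify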
accel_json_cfg=() 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:07.297 [2024-07-15 19:12:17.834245] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:07.297 [2024-07-15 19:12:17.834295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442200 ] 00:06:07.297 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.297 [2024-07-15 19:12:17.862312] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:07.297 [2024-07-15 19:12:17.888961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.297 [2024-07-15 19:12:17.928336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.297 19:12:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.676 19:12:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.677 19:12:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.677 19:12:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:08.677 19:12:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.677 00:06:08.677 real 0m1.292s 00:06:08.677 user 0m1.195s 00:06:08.677 sys 0m0.113s 00:06:08.677 19:12:19 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.677 19:12:19 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:08.677 ************************************ 00:06:08.677 END TEST accel_dif_verify 00:06:08.677 ************************************ 00:06:08.677 19:12:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.677 19:12:19 accel -- 
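The dif_generate case that follows uses the same harness but asks accel_perf to generate DIF metadata rather than verify it. A hedged standalone sketch:

  # 1-second DIF generate workload on the software module
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate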
accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:08.677 19:12:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:08.677 19:12:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.677 19:12:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.677 ************************************ 00:06:08.677 START TEST accel_dif_generate 00:06:08.677 ************************************ 00:06:08.677 19:12:19 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:08.677 [2024-07-15 19:12:19.196937] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:08.677 [2024-07-15 19:12:19.197002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442441 ] 00:06:08.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.677 [2024-07-15 19:12:19.224797] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:08.677 [2024-07-15 19:12:19.251900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.677 [2024-07-15 19:12:19.291594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 
19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.677 19:12:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.614 19:12:20 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:09.614 19:12:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.614 00:06:09.614 real 0m1.294s 00:06:09.614 user 0m1.197s 00:06:09.614 sys 0m0.113s 00:06:09.614 19:12:20 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.614 19:12:20 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:09.614 ************************************ 00:06:09.614 END TEST accel_dif_generate 00:06:09.614 ************************************ 00:06:09.874 19:12:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.874 19:12:20 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:09.874 19:12:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:09.874 19:12:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.874 19:12:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.874 ************************************ 00:06:09.874 START TEST accel_dif_generate_copy 00:06:09.874 ************************************ 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 
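The final case in this stretch, accel_dif_generate_copy, combines DIF generation with a copy into a second buffer (two 4096-byte buffers appear in the trace). A rough equivalent, with the same caveat about the omitted /dev/fd/62 config:

  # 1-second DIF generate-and-copy workload
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy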
19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:09.874 [2024-07-15 19:12:20.556306] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:09.874 [2024-07-15 19:12:20.556365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442682 ] 00:06:09.874 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.874 [2024-07-15 19:12:20.585449] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.874 [2024-07-15 19:12:20.613270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.874 [2024-07-15 19:12:20.653276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.874 19:12:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.279 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.279 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.279 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.279 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.279 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.279 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.279 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:11.280 00:06:11.280 real 0m1.296s 00:06:11.280 user 0m1.192s 00:06:11.280 sys 0m0.117s 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.280 19:12:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:11.280 ************************************ 00:06:11.280 END TEST accel_dif_generate_copy 00:06:11.280 ************************************ 00:06:11.280 19:12:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.280 19:12:21 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:11.280 19:12:21 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.280 19:12:21 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:11.280 19:12:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.280 19:12:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.280 ************************************ 00:06:11.280 START TEST accel_comp 00:06:11.280 ************************************ 00:06:11.280 19:12:21 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:11.280 19:12:21 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:11.280 [2024-07-15 19:12:21.911830] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:11.280 [2024-07-15 19:12:21.911881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442927 ] 00:06:11.280 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.280 [2024-07-15 19:12:21.939632] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
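For reference, the accel_comp case starting above drives the same accel_perf example binary with the compress workload against the checked-in test file test/accel/bib. Below is a minimal by-hand sketch of that invocation, assuming the SPDK tree is already built at the workspace path captured in this log; the harness additionally pipes a generated accel JSON config in through -c /dev/fd/62, which the sketch omits and instead relies on the default software module.

    # Hypothetical manual re-run of the compress case; flag meanings are inferred
    # from the accel_perf command line captured in this log, not from separate docs.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -t 1: run the workload for one second
    # -w compress: opcode under test
    # -l <file>: uncompressed input file used by the compress/decompress workloads
    "$SPDK/build/examples/accel_perf" -t 1 -w compress -l "$SPDK/test/accel/bib"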
00:06:11.280 [2024-07-15 19:12:21.967598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.280 [2024-07-15 19:12:22.006837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.280 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.281 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.281 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.281 19:12:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.281 19:12:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.281 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.281 19:12:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.654 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:12.655 19:12:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.655 00:06:12.655 real 0m1.290s 00:06:12.655 user 0m1.199s 00:06:12.655 sys 0m0.106s 00:06:12.655 19:12:23 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.655 19:12:23 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:12.655 ************************************ 00:06:12.655 END TEST accel_comp 00:06:12.655 ************************************ 00:06:12.655 19:12:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.655 19:12:23 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.655 19:12:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:12.655 19:12:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.655 19:12:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.655 ************************************ 00:06:12.655 START TEST accel_decomp 00:06:12.655 ************************************ 00:06:12.655 19:12:23 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 
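The accel_decomp case whose configuration is traced above repeats the same pattern for the decompress opcode and adds -y, understood here as enabling verification of the output (an interpretation, not something the log states). A sketch under the same assumptions as the compress example:

    # Hypothetical manual re-run of the decompress-with-verify case.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y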
00:06:12.655 [2024-07-15 19:12:23.276389] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:12.655 [2024-07-15 19:12:23.276465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443176 ] 00:06:12.655 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.655 [2024-07-15 19:12:23.306106] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:12.655 [2024-07-15 19:12:23.334063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.655 [2024-07-15 19:12:23.374212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.655 19:12:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.031 19:12:24 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:14.031 19:12:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.031 00:06:14.031 real 0m1.301s 00:06:14.031 user 0m1.196s 00:06:14.031 sys 0m0.120s 00:06:14.031 19:12:24 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.031 19:12:24 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:14.031 ************************************ 00:06:14.031 END TEST accel_decomp 00:06:14.031 ************************************ 00:06:14.031 19:12:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.031 19:12:24 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:14.031 19:12:24 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:14.031 19:12:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.031 19:12:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.031 ************************************ 00:06:14.031 START TEST accel_decomp_full 00:06:14.031 ************************************ 00:06:14.031 19:12:24 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:14.031 19:12:24 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:14.031 19:12:24 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:14.031 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.031 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.031 19:12:24 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:14.031 19:12:24 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:14.031 19:12:24 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:14.031 19:12:24 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:14.032 [2024-07-15 19:12:24.643343] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:14.032 [2024-07-15 19:12:24.643412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443421 ] 00:06:14.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.032 [2024-07-15 19:12:24.672418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:14.032 [2024-07-15 19:12:24.699652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.032 [2024-07-15 19:12:24.739454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.032 19:12:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:15.409 19:12:25 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.409 00:06:15.409 real 0m1.305s 00:06:15.409 user 0m1.210s 00:06:15.409 sys 0m0.109s 00:06:15.409 19:12:25 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.409 19:12:25 
accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:15.409 ************************************ 00:06:15.409 END TEST accel_decomp_full 00:06:15.409 ************************************ 00:06:15.409 19:12:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.409 19:12:25 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.409 19:12:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:15.409 19:12:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.409 19:12:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.409 ************************************ 00:06:15.409 START TEST accel_decomp_mcore 00:06:15.409 ************************************ 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:15.409 19:12:25 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:15.409 [2024-07-15 19:12:26.016294] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:15.409 [2024-07-15 19:12:26.016373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443668 ] 00:06:15.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.409 [2024-07-15 19:12:26.045615] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
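The accel_decomp_mcore case starting above runs the same decompress-with-verify workload with -m 0xf, a core mask selecting four cores; the entries that follow show the effect, with four cores reported and a reactor started on each of cores 0 through 3 rather than on core 0 alone. A sketch of the multi-core variant, under the same assumptions as the earlier ones:

    # Hypothetical manual re-run of the multi-core decompress case.
    # -m 0xf: core mask covering four cores (matches the four reactors in the log).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf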
00:06:15.409 [2024-07-15 19:12:26.072968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.409 [2024-07-15 19:12:26.115973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.409 [2024-07-15 19:12:26.116071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.409 [2024-07-15 19:12:26.116174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.409 [2024-07-15 19:12:26.116175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.409 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.410 19:12:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 
19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.790 00:06:16.790 real 0m1.310s 00:06:16.790 user 0m4.534s 00:06:16.790 sys 0m0.120s 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.790 19:12:27 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:16.790 ************************************ 00:06:16.790 END TEST accel_decomp_mcore 00:06:16.790 ************************************ 00:06:16.790 19:12:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.790 
19:12:27 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.790 19:12:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:16.790 19:12:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.790 19:12:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.790 ************************************ 00:06:16.790 START TEST accel_decomp_full_mcore 00:06:16.790 ************************************ 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:16.790 [2024-07-15 19:12:27.391621] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:16.790 [2024-07-15 19:12:27.391689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443925 ] 00:06:16.790 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.790 [2024-07-15 19:12:27.420740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:16.790 [2024-07-15 19:12:27.447726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.790 [2024-07-15 19:12:27.489498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.790 [2024-07-15 19:12:27.489596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.790 [2024-07-15 19:12:27.489664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.790 [2024-07-15 19:12:27.489665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:16.790 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:16.791 19:12:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.170 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.171 00:06:18.171 real 0m1.318s 00:06:18.171 user 0m4.560s 00:06:18.171 sys 0m0.127s 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.171 19:12:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:18.171 ************************************ 00:06:18.171 END TEST accel_decomp_full_mcore 00:06:18.171 ************************************ 00:06:18.171 19:12:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.171 19:12:28 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.171 19:12:28 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:18.171 19:12:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.171 19:12:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.171 ************************************ 00:06:18.171 START TEST accel_decomp_mthread 00:06:18.171 ************************************ 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:18.171 [2024-07-15 19:12:28.776046] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:18.171 [2024-07-15 19:12:28.776111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1444177 ] 00:06:18.171 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.171 [2024-07-15 19:12:28.804571] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
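For reference, each accel_decomp_* case in this run wraps the same accel_perf example binary and only varies its flags. A minimal sketch of the two invocations traced so far, with the workspace prefix shortened and the flag meanings inferred from the test names rather than asserted from accel_perf itself:

    # accel_decomp_full_mcore: 1-second software decompress of test/accel/bib across core mask 0xf
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf
    # accel_decomp_mthread: same workload on a single core (EAL -c 0x1 above) with -T 2
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l ./test/accel/bib -y -T 2

Both command lines appear verbatim in the xtrace output; -c /dev/fd/62 appears to be the JSON accel configuration that build_accel_config assembles and passes in through process substitution.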
00:06:18.171 [2024-07-15 19:12:28.831887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.171 [2024-07-15 19:12:28.870764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.171 19:12:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.548 00:06:19.548 real 0m1.299s 00:06:19.548 user 0m1.199s 00:06:19.548 sys 0m0.115s 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.548 19:12:30 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:19.548 ************************************ 00:06:19.548 END TEST accel_decomp_mthread 00:06:19.548 ************************************ 00:06:19.548 19:12:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.548 19:12:30 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.548 19:12:30 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:19.548 19:12:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.548 19:12:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.548 ************************************ 00:06:19.548 START TEST accel_decomp_full_mthread 00:06:19.548 ************************************ 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:19.548 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:19.548 [2024-07-15 19:12:30.141691] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:19.548 [2024-07-15 19:12:30.141755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1444424 ] 00:06:19.548 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.548 [2024-07-15 19:12:30.170335] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:19.548 [2024-07-15 19:12:30.198020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.549 [2024-07-15 19:12:30.237660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.549 19:12:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.927 00:06:20.927 real 0m1.322s 00:06:20.927 user 0m1.218s 00:06:20.927 sys 0m0.118s 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.927 19:12:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:20.927 ************************************ 00:06:20.927 END TEST accel_decomp_full_mthread 00:06:20.927 ************************************ 00:06:20.928 19:12:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.928 19:12:31 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:20.928 19:12:31 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 
00:06:20.928 19:12:31 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:20.928 19:12:31 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:20.928 19:12:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.928 19:12:31 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.928 19:12:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.928 19:12:31 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.928 19:12:31 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.928 19:12:31 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.928 19:12:31 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.928 19:12:31 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:20.928 19:12:31 accel -- accel/accel.sh@41 -- # jq -r . 00:06:20.928 ************************************ 00:06:20.928 START TEST accel_dif_functional_tests 00:06:20.928 ************************************ 00:06:20.928 19:12:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:20.928 [2024-07-15 19:12:31.546438] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:20.928 [2024-07-15 19:12:31.546474] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1444677 ] 00:06:20.928 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.928 [2024-07-15 19:12:31.572272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.928 [2024-07-15 19:12:31.600908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.928 [2024-07-15 19:12:31.642017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.928 [2024-07-15 19:12:31.642038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.928 [2024-07-15 19:12:31.642039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.928 00:06:20.928 00:06:20.928 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.928 http://cunit.sourceforge.net/ 00:06:20.928 00:06:20.928 00:06:20.928 Suite: accel_dif 00:06:20.928 Test: verify: DIF generated, GUARD check ...passed 00:06:20.928 Test: verify: DIF generated, APPTAG check ...passed 00:06:20.928 Test: verify: DIF generated, REFTAG check ...passed 00:06:20.928 Test: verify: DIF not generated, GUARD check ...[2024-07-15 19:12:31.705504] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:20.928 passed 00:06:20.928 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 19:12:31.705553] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:20.928 passed 00:06:20.928 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 19:12:31.705588] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:20.928 passed 00:06:20.928 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:20.928 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 19:12:31.705631] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:20.928 passed 00:06:20.928 Test: verify: APPTAG incorrect, no APPTAG check ...passed 
00:06:20.928 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:20.928 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:20.928 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 19:12:31.705724] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:20.928 passed 00:06:20.928 Test: verify copy: DIF generated, GUARD check ...passed 00:06:20.928 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:20.928 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:20.928 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 19:12:31.705828] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:20.928 passed 00:06:20.928 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 19:12:31.705851] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:20.928 passed 00:06:20.928 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 19:12:31.705869] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:20.928 passed 00:06:20.928 Test: generate copy: DIF generated, GUARD check ...passed 00:06:20.928 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:20.928 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:20.928 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:20.928 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:20.928 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:20.928 Test: generate copy: iovecs-len validate ...[2024-07-15 19:12:31.706024] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:20.928 passed 00:06:20.928 Test: generate copy: buffer alignment validate ...passed 00:06:20.928 00:06:20.928 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.928 suites 1 1 n/a 0 0 00:06:20.928 tests 26 26 26 0 0 00:06:20.928 asserts 115 115 115 0 n/a 00:06:20.928 00:06:20.928 Elapsed time = 0.000 seconds 00:06:21.187 00:06:21.187 real 0m0.362s 00:06:21.187 user 0m0.566s 00:06:21.187 sys 0m0.137s 00:06:21.187 19:12:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.187 19:12:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:21.187 ************************************ 00:06:21.187 END TEST accel_dif_functional_tests 00:06:21.187 ************************************ 00:06:21.187 19:12:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.187 00:06:21.187 real 0m29.459s 00:06:21.187 user 0m33.155s 00:06:21.187 sys 0m4.181s 00:06:21.187 19:12:31 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.187 19:12:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.187 ************************************ 00:06:21.187 END TEST accel 00:06:21.187 ************************************ 00:06:21.187 19:12:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.187 19:12:31 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:21.187 19:12:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.187 19:12:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.187 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:21.187 ************************************ 00:06:21.187 START TEST accel_rpc 00:06:21.187 ************************************ 00:06:21.187 19:12:31 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:21.447 * Looking for test storage... 00:06:21.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:21.447 19:12:32 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:21.447 19:12:32 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1444955 00:06:21.447 19:12:32 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1444955 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1444955 ']' 00:06:21.447 19:12:32 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.447 [2024-07-15 19:12:32.108142] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:06:21.447 [2024-07-15 19:12:32.108189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1444955 ] 00:06:21.447 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.447 [2024-07-15 19:12:32.134735] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.447 [2024-07-15 19:12:32.162617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.447 [2024-07-15 19:12:32.203160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:21.447 19:12:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:21.447 19:12:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:21.447 19:12:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:21.447 19:12:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:21.447 19:12:32 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.447 19:12:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.447 ************************************ 00:06:21.447 START TEST accel_assign_opcode 00:06:21.447 ************************************ 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:21.447 [2024-07-15 19:12:32.251571] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:21.447 [2024-07-15 19:12:32.259585] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.447 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:21.706 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.706 19:12:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:21.706 
19:12:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:21.706 19:12:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:21.706 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.706 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:21.706 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.706 software 00:06:21.706 00:06:21.706 real 0m0.224s 00:06:21.706 user 0m0.047s 00:06:21.706 sys 0m0.006s 00:06:21.706 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.706 19:12:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:21.706 ************************************ 00:06:21.706 END TEST accel_assign_opcode 00:06:21.706 ************************************ 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:21.706 19:12:32 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1444955 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1444955 ']' 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1444955 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1444955 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1444955' 00:06:21.706 killing process with pid 1444955 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@967 -- # kill 1444955 00:06:21.706 19:12:32 accel_rpc -- common/autotest_common.sh@972 -- # wait 1444955 00:06:22.283 00:06:22.283 real 0m0.862s 00:06:22.283 user 0m0.801s 00:06:22.283 sys 0m0.362s 00:06:22.283 19:12:32 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.283 19:12:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.283 ************************************ 00:06:22.283 END TEST accel_rpc 00:06:22.283 ************************************ 00:06:22.283 19:12:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.283 19:12:32 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:22.283 19:12:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.283 19:12:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.283 19:12:32 -- common/autotest_common.sh@10 -- # set +x 00:06:22.283 ************************************ 00:06:22.283 START TEST app_cmdline 00:06:22.283 ************************************ 00:06:22.283 19:12:32 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:22.283 * Looking for test storage... 
00:06:22.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:22.283 19:12:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:22.283 19:12:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1445083 00:06:22.283 19:12:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1445083 00:06:22.283 19:12:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:22.283 19:12:32 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1445083 ']' 00:06:22.283 19:12:32 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.283 19:12:32 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.283 19:12:32 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.283 19:12:32 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.283 19:12:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.283 [2024-07-15 19:12:33.048794] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:22.283 [2024-07-15 19:12:33.048846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445083 ] 00:06:22.283 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.283 [2024-07-15 19:12:33.074910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
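The accel_assign_opcode case that just finished drives opcode assignment over plain RPC while the target is still in --wait-for-rpc mode. A rough sketch of the sequence, using the same rpc.py methods visible in the trace (workspace prefix shortened; in the test they go through the rpc_cmd helper, which forwards them to the target's RPC socket):

    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init; logged as "copy will be assigned to module incorrect"
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # reassigns the copy opcode to the software module
    ./scripts/rpc.py framework_start_init                     # completes startup so the assignment takes effect
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expected to print "software"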
00:06:22.283 [2024-07-15 19:12:33.103998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.541 [2024-07-15 19:12:33.143968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.541 19:12:33 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.541 19:12:33 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:22.541 19:12:33 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:22.800 { 00:06:22.800 "version": "SPDK v24.09-pre git sha1 a95bbf233", 00:06:22.800 "fields": { 00:06:22.800 "major": 24, 00:06:22.800 "minor": 9, 00:06:22.800 "patch": 0, 00:06:22.800 "suffix": "-pre", 00:06:22.800 "commit": "a95bbf233" 00:06:22.800 } 00:06:22.800 } 00:06:22.800 19:12:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.800 19:12:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.800 19:12:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:22.800 19:12:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.800 19:12:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.800 19:12:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.800 19:12:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.800 19:12:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.800 19:12:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.800 19:12:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:22.800 19:12:33 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.058 request: 00:06:23.058 { 00:06:23.058 "method": 
"env_dpdk_get_mem_stats", 00:06:23.058 "req_id": 1 00:06:23.058 } 00:06:23.058 Got JSON-RPC error response 00:06:23.058 response: 00:06:23.058 { 00:06:23.058 "code": -32601, 00:06:23.058 "message": "Method not found" 00:06:23.058 } 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:23.058 19:12:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1445083 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1445083 ']' 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1445083 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1445083 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1445083' 00:06:23.058 killing process with pid 1445083 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@967 -- # kill 1445083 00:06:23.058 19:12:33 app_cmdline -- common/autotest_common.sh@972 -- # wait 1445083 00:06:23.317 00:06:23.317 real 0m1.150s 00:06:23.317 user 0m1.345s 00:06:23.317 sys 0m0.406s 00:06:23.317 19:12:34 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.317 19:12:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.317 ************************************ 00:06:23.317 END TEST app_cmdline 00:06:23.317 ************************************ 00:06:23.317 19:12:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.317 19:12:34 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.317 19:12:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.317 19:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.317 19:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:23.317 ************************************ 00:06:23.317 START TEST version 00:06:23.317 ************************************ 00:06:23.317 19:12:34 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.575 * Looking for test storage... 
00:06:23.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:23.575 19:12:34 version -- app/version.sh@17 -- # get_header_version major 00:06:23.575 19:12:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.575 19:12:34 version -- app/version.sh@14 -- # cut -f2 00:06:23.575 19:12:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.575 19:12:34 version -- app/version.sh@17 -- # major=24 00:06:23.575 19:12:34 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.575 19:12:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.575 19:12:34 version -- app/version.sh@14 -- # cut -f2 00:06:23.575 19:12:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.575 19:12:34 version -- app/version.sh@18 -- # minor=9 00:06:23.575 19:12:34 version -- app/version.sh@19 -- # get_header_version patch 00:06:23.575 19:12:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.575 19:12:34 version -- app/version.sh@14 -- # cut -f2 00:06:23.575 19:12:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.575 19:12:34 version -- app/version.sh@19 -- # patch=0 00:06:23.575 19:12:34 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.575 19:12:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.575 19:12:34 version -- app/version.sh@14 -- # cut -f2 00:06:23.575 19:12:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.575 19:12:34 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.576 19:12:34 version -- app/version.sh@22 -- # version=24.9 00:06:23.576 19:12:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.576 19:12:34 version -- app/version.sh@28 -- # version=24.9rc0 00:06:23.576 19:12:34 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:23.576 19:12:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.576 19:12:34 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:23.576 19:12:34 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:23.576 00:06:23.576 real 0m0.147s 00:06:23.576 user 0m0.080s 00:06:23.576 sys 0m0.102s 00:06:23.576 19:12:34 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.576 19:12:34 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.576 ************************************ 00:06:23.576 END TEST version 00:06:23.576 ************************************ 00:06:23.576 19:12:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.576 19:12:34 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:23.576 19:12:34 -- spdk/autotest.sh@198 -- # uname -s 00:06:23.576 19:12:34 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:23.576 19:12:34 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:23.576 19:12:34 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 
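The version test traced above boils down to a handful of shell one-liners: pull SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h, assemble the expected string, and compare it with what the installed python package reports. A condensed, standalone sketch of that flow is below; SPDK_ROOT is an assumed stand-in for the workspace path shown in the log, and the PYTHONPATH is simplified to the one directory that matters for the import.

#!/usr/bin/env bash
# Condensed sketch of the app/version.sh check traced above (not the script itself).
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
hdr=$SPDK_ROOT/include/spdk/version.h

get_field() {
    # e.g. get_field MAJOR -> 24 ; mirrors the grep/cut/tr pipeline in the trace
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_field MAJOR)
minor=$(get_field MINOR)
patch=$(get_field PATCH)
suffix=$(get_field SUFFIX)

version=$major.$minor
(( patch != 0 )) && version=$version.$patch
[[ $suffix == -pre ]] && version=${version}rc0   # "-pre" in the header shows up as rc0 in the python package

py_version=$(PYTHONPATH=$SPDK_ROOT/python python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_version == "$version" ]] && echo "version OK: $version"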
00:06:23.576 19:12:34 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:23.576 19:12:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:23.576 19:12:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:23.576 19:12:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.576 19:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:23.576 19:12:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:23.576 19:12:34 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:23.576 19:12:34 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:23.576 19:12:34 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:23.576 19:12:34 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:23.576 19:12:34 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:23.576 19:12:34 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.576 19:12:34 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:23.576 19:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.576 19:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:23.576 ************************************ 00:06:23.576 START TEST nvmf_tcp 00:06:23.576 ************************************ 00:06:23.576 19:12:34 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.576 * Looking for test storage... 00:06:23.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.576 19:12:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:23.576 19:12:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:23.576 19:12:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.836 19:12:34 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.836 19:12:34 
nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.836 19:12:34 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.836 19:12:34 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.836 19:12:34 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.836 19:12:34 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.836 19:12:34 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:23.836 19:12:34 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:23.836 19:12:34 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:23.836 19:12:34 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:23.837 19:12:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.837 19:12:34 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:23.837 19:12:34 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:23.837 19:12:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:23.837 19:12:34 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.837 19:12:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.837 ************************************ 00:06:23.837 START TEST nvmf_example 00:06:23.837 ************************************ 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:23.837 * Looking for test storage... 00:06:23.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:23.837 19:12:34 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:23.837 19:12:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.113 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:29.113 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:29.113 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:29.113 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:29.113 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:29.113 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:29.113 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:29.113 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:29.113 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:29.113 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.114 19:12:39 
nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:29.114 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:29.114 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:29.114 
Found net devices under 0000:86:00.0: cvl_0_0 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:29.114 Found net devices under 0000:86:00.1: cvl_0_1 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:29.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:06:29.114 00:06:29.114 --- 10.0.0.2 ping statistics --- 00:06:29.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.114 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:29.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:06:29.114 00:06:29.114 --- 10.0.0.1 ping statistics --- 00:06:29.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.114 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1448426 00:06:29.114 19:12:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:29.115 19:12:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1448426 00:06:29.115 19:12:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1448426 ']' 00:06:29.115 19:12:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.115 19:12:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.115 19:12:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
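The nvmf_tcp_init sequence traced above is what makes the rest of the TCP tests possible on a physical NIC: one port of the ice adapter (cvl_0_0) is moved into a private network namespace to play the target, the peer port (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction confirms the 10.0.0.0/24 link works before the target app is launched. A condensed sketch of that setup follows; interface, namespace, and address names are the ones discovered earlier in this log.

#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init steps traced above.
TGT_IF=cvl_0_0; TGT_IP=10.0.0.2
INI_IF=cvl_0_1; INI_IP=10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"          # target port lives in its own namespace

ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# open the default NVMe/TCP port (4420) on the initiator-side interface
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 "$TGT_IP"                        # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 "$INI_IP"    # target namespace -> root namespace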
00:06:29.115 19:12:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:29.115 19:12:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.115 19:12:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:30.090 19:12:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:30.090 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.302 Initializing NVMe Controllers 00:06:42.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:42.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:42.302 Initialization complete. Launching workers. 00:06:42.302 ======================================================== 00:06:42.302 Latency(us) 00:06:42.302 Device Information : IOPS MiB/s Average min max 00:06:42.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17847.20 69.72 3585.49 597.03 16055.41 00:06:42.302 ======================================================== 00:06:42.303 Total : 17847.20 69.72 3585.49 597.03 16055.41 00:06:42.303 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:42.303 rmmod nvme_tcp 00:06:42.303 rmmod nvme_fabrics 00:06:42.303 rmmod nvme_keyring 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1448426 ']' 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1448426 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1448426 ']' 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1448426 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1448426 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1448426' 00:06:42.303 killing process with pid 1448426 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1448426 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1448426 00:06:42.303 nvmf threads initialize successfully 00:06:42.303 bdev subsystem init successfully 00:06:42.303 created a nvmf target service 00:06:42.303 create targets's poll groups done 00:06:42.303 all subsystems of target started 00:06:42.303 nvmf target is 
running 00:06:42.303 all subsystems of target stopped 00:06:42.303 destroy targets's poll groups done 00:06:42.303 destroyed the nvmf target service 00:06:42.303 bdev subsystem finish successfully 00:06:42.303 nvmf threads destroy successfully 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.303 19:12:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.561 19:12:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:42.561 19:12:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:42.561 19:12:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:42.561 19:12:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.824 00:06:42.824 real 0m18.950s 00:06:42.824 user 0m46.250s 00:06:42.824 sys 0m5.241s 00:06:42.824 19:12:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.824 19:12:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.824 ************************************ 00:06:42.824 END TEST nvmf_example 00:06:42.824 ************************************ 00:06:42.824 19:12:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:42.824 19:12:53 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:42.824 19:12:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:42.824 19:12:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.824 19:12:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.824 ************************************ 00:06:42.824 START TEST nvmf_filesystem 00:06:42.824 ************************************ 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:42.824 * Looking for test storage... 
00:06:42.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:42.824 19:12:53 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:42.824 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:42.825 #define SPDK_CONFIG_H 00:06:42.825 #define SPDK_CONFIG_APPS 1 00:06:42.825 #define SPDK_CONFIG_ARCH native 00:06:42.825 #undef SPDK_CONFIG_ASAN 00:06:42.825 #undef SPDK_CONFIG_AVAHI 00:06:42.825 #undef SPDK_CONFIG_CET 00:06:42.825 #define SPDK_CONFIG_COVERAGE 1 00:06:42.825 #define SPDK_CONFIG_CROSS_PREFIX 00:06:42.825 #undef SPDK_CONFIG_CRYPTO 00:06:42.825 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:42.825 #undef SPDK_CONFIG_CUSTOMOCF 00:06:42.825 #undef SPDK_CONFIG_DAOS 00:06:42.825 #define SPDK_CONFIG_DAOS_DIR 00:06:42.825 #define SPDK_CONFIG_DEBUG 1 00:06:42.825 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:42.825 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:42.825 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:06:42.825 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:42.825 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:42.825 #undef SPDK_CONFIG_DPDK_UADK 00:06:42.825 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:42.825 #define SPDK_CONFIG_EXAMPLES 1 00:06:42.825 #undef SPDK_CONFIG_FC 00:06:42.825 #define SPDK_CONFIG_FC_PATH 00:06:42.825 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:42.825 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:42.825 #undef SPDK_CONFIG_FUSE 00:06:42.825 #undef SPDK_CONFIG_FUZZER 00:06:42.825 #define SPDK_CONFIG_FUZZER_LIB 00:06:42.825 #undef SPDK_CONFIG_GOLANG 00:06:42.825 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:42.825 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:42.825 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:42.825 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:42.825 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:42.825 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:42.825 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:42.825 #define SPDK_CONFIG_IDXD 1 00:06:42.825 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:42.825 #undef SPDK_CONFIG_IPSEC_MB 00:06:42.825 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:42.825 #define SPDK_CONFIG_ISAL 1 00:06:42.825 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:42.825 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:42.825 #define 
SPDK_CONFIG_LIBDIR 00:06:42.825 #undef SPDK_CONFIG_LTO 00:06:42.825 #define SPDK_CONFIG_MAX_LCORES 128 00:06:42.825 #define SPDK_CONFIG_NVME_CUSE 1 00:06:42.825 #undef SPDK_CONFIG_OCF 00:06:42.825 #define SPDK_CONFIG_OCF_PATH 00:06:42.825 #define SPDK_CONFIG_OPENSSL_PATH 00:06:42.825 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:42.825 #define SPDK_CONFIG_PGO_DIR 00:06:42.825 #undef SPDK_CONFIG_PGO_USE 00:06:42.825 #define SPDK_CONFIG_PREFIX /usr/local 00:06:42.825 #undef SPDK_CONFIG_RAID5F 00:06:42.825 #undef SPDK_CONFIG_RBD 00:06:42.825 #define SPDK_CONFIG_RDMA 1 00:06:42.825 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:42.825 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:42.825 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:42.825 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:42.825 #define SPDK_CONFIG_SHARED 1 00:06:42.825 #undef SPDK_CONFIG_SMA 00:06:42.825 #define SPDK_CONFIG_TESTS 1 00:06:42.825 #undef SPDK_CONFIG_TSAN 00:06:42.825 #define SPDK_CONFIG_UBLK 1 00:06:42.825 #define SPDK_CONFIG_UBSAN 1 00:06:42.825 #undef SPDK_CONFIG_UNIT_TESTS 00:06:42.825 #undef SPDK_CONFIG_URING 00:06:42.825 #define SPDK_CONFIG_URING_PATH 00:06:42.825 #undef SPDK_CONFIG_URING_ZNS 00:06:42.825 #undef SPDK_CONFIG_USDT 00:06:42.825 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:42.825 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:42.825 #define SPDK_CONFIG_VFIO_USER 1 00:06:42.825 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:42.825 #define SPDK_CONFIG_VHOST 1 00:06:42.825 #define SPDK_CONFIG_VIRTIO 1 00:06:42.825 #undef SPDK_CONFIG_VTUNE 00:06:42.825 #define SPDK_CONFIG_VTUNE_DIR 00:06:42.825 #define SPDK_CONFIG_WERROR 1 00:06:42.825 #define SPDK_CONFIG_WPDK_DIR 00:06:42.825 #undef SPDK_CONFIG_XNVME 00:06:42.825 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:42.825 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:42.826 
19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:42.826 
19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:42.826 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1450842 ]] 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1450842 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.xMRoLn 00:06:43.087 
19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:43.087 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xMRoLn/tests/target /tmp/spdk.xMRoLn 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=187962236928 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8012062720 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=3375104 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986375680 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=774144 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:43.088 * Looking for test storage... 
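[annotation] The trace above is the harness sizing check: set_test_storage parses `df -T`, picks the filesystem backing the target test directory, and confirms the free space covers the ~2.2 GB request before settling on the overlay root. A rough bash sketch of an equivalent check (not the harness's own code; the directory and byte count are copied from the trace, the df invocations are standard GNU coreutils):
    requested_size=2214592512                                   # bytes requested in the trace above
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    avail=$(df --output=avail -B1 "$target_dir" | tail -n1)     # free bytes on the backing filesystem
    mount_point=$(df --output=target "$target_dir" | tail -n1)  # resolves to / in this run
    (( avail >= requested_size )) && echo "* Found test storage at $target_dir ($avail bytes free on $mount_point)"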
00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=187962236928 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10226655232 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.088 19:12:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
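[annotation] A few lines back, nvmf/common.sh derives the initiator identity with `nvme gen-hostnqn`; the NVME_HOSTID it exports is the UUID portion of that hostnqn, as the two values in the trace share. A minimal sketch of the same derivation with nvme-cli (the UUID is whatever gen-hostnqn returns, not necessarily the one from this run):
    hostnqn=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    hostid=${hostnqn##*uuid:}          # bare UUID, mirroring how NVME_HOSTID appears above
    echo "--hostnqn=$hostnqn --hostid=$hostid"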
00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:43.089 19:12:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
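[annotation] gather_supported_nvmf_pci_devs above builds whitelists of vendor:device IDs (Intel 0x1592/0x159b for e810, 0x37d2 for x722, several Mellanox parts) and then walks the PCI bus for matches. A purely illustrative way to reproduce that scan outside the harness with standard pciutils, using the two Intel e810 IDs from the list (this is not the harness's own probing code):
    lspci -nn -d 8086:159b    # on this node matches 0000:86:00.0 and 0000:86:00.1, per the "Found ..." lines below
    lspci -nn -d 8086:1592    # the other e810 ID on the list; no ports with this ID are reported in this run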
00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:48.357 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:48.357 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.357 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:48.358 Found net devices under 0000:86:00.0: cvl_0_0 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:48.358 Found net devices under 0000:86:00.1: cvl_0_1 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.358 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.616 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.616 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.616 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:48.616 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.616 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.616 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.616 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:48.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:48.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:06:48.616 00:06:48.616 --- 10.0.0.2 ping statistics --- 00:06:48.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.616 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:06:48.616 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:06:48.616 00:06:48.616 --- 10.0.0.1 ping statistics --- 00:06:48.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.616 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:06:48.616 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.617 ************************************ 00:06:48.617 START TEST nvmf_filesystem_no_in_capsule 00:06:48.617 ************************************ 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1454073 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1454073 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
1454073 ']' 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.617 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.876 [2024-07-15 19:12:59.492501] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:06:48.876 [2024-07-15 19:12:59.492544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.876 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.876 [2024-07-15 19:12:59.522292] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.876 [2024-07-15 19:12:59.550119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.876 [2024-07-15 19:12:59.592254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.876 [2024-07-15 19:12:59.592296] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.876 [2024-07-15 19:12:59.592303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.876 [2024-07-15 19:12:59.592309] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.876 [2024-07-15 19:12:59.592315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
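For orientation, the nvmfappstart / waitforlisten steps traced above reduce to roughly the following shell sequence. This is a minimal sketch that reuses the namespace name, binary path and RPC socket printed in the log; the real helpers in nvmf/common.sh and autotest_common.sh add retry limits and shared-memory bookkeeping that are omitted here.

# launch the SPDK NVMe-oF target inside the test network namespace
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# wait until the target answers on its RPC UNIX socket before sending any rpc.py commands
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done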
00:06:48.876 [2024-07-15 19:12:59.592615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.876 [2024-07-15 19:12:59.592695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.876 [2024-07-15 19:12:59.592785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.876 [2024-07-15 19:12:59.592786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.876 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.876 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:48.876 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:48.876 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:48.876 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.134 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:49.134 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 [2024-07-15 19:12:59.742339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 Malloc1 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 [2024-07-15 19:12:59.895236] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:49.135 { 00:06:49.135 "name": "Malloc1", 00:06:49.135 "aliases": [ 00:06:49.135 "3737d00f-9536-43a4-8979-ae8864e4f9ba" 00:06:49.135 ], 00:06:49.135 "product_name": "Malloc disk", 00:06:49.135 "block_size": 512, 00:06:49.135 "num_blocks": 1048576, 00:06:49.135 "uuid": "3737d00f-9536-43a4-8979-ae8864e4f9ba", 00:06:49.135 "assigned_rate_limits": { 00:06:49.135 "rw_ios_per_sec": 0, 00:06:49.135 "rw_mbytes_per_sec": 0, 00:06:49.135 "r_mbytes_per_sec": 0, 00:06:49.135 "w_mbytes_per_sec": 0 00:06:49.135 }, 00:06:49.135 "claimed": true, 00:06:49.135 "claim_type": "exclusive_write", 00:06:49.135 "zoned": false, 00:06:49.135 "supported_io_types": { 00:06:49.135 "read": true, 00:06:49.135 "write": true, 00:06:49.135 "unmap": true, 00:06:49.135 "flush": true, 00:06:49.135 "reset": true, 00:06:49.135 "nvme_admin": false, 00:06:49.135 "nvme_io": false, 00:06:49.135 "nvme_io_md": false, 00:06:49.135 "write_zeroes": true, 00:06:49.135 "zcopy": true, 00:06:49.135 "get_zone_info": false, 00:06:49.135 "zone_management": false, 00:06:49.135 "zone_append": false, 00:06:49.135 "compare": false, 00:06:49.135 "compare_and_write": false, 00:06:49.135 "abort": true, 00:06:49.135 "seek_hole": false, 00:06:49.135 "seek_data": false, 00:06:49.135 "copy": true, 00:06:49.135 "nvme_iov_md": false 00:06:49.135 }, 00:06:49.135 "memory_domains": [ 00:06:49.135 { 
00:06:49.135 "dma_device_id": "system", 00:06:49.135 "dma_device_type": 1 00:06:49.135 }, 00:06:49.135 { 00:06:49.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.135 "dma_device_type": 2 00:06:49.135 } 00:06:49.135 ], 00:06:49.135 "driver_specific": {} 00:06:49.135 } 00:06:49.135 ]' 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:49.135 19:12:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:49.394 19:13:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:49.394 19:13:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:49.394 19:13:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:49.394 19:13:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:49.394 19:13:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:50.767 19:13:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:50.767 19:13:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:50.767 19:13:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:50.767 19:13:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:50.767 19:13:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:52.667 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:53.235 19:13:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.172 ************************************ 00:06:54.172 START TEST filesystem_ext4 00:06:54.172 ************************************ 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:54.172 19:13:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:54.172 19:13:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:54.172 mke2fs 1.46.5 (30-Dec-2021) 00:06:54.172 Discarding device blocks: 0/522240 done 00:06:54.172 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:54.172 Filesystem UUID: de659a57-fd29-4664-853a-b51152f57c49 00:06:54.172 Superblock backups stored on blocks: 00:06:54.172 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:54.172 00:06:54.172 Allocating group tables: 0/64 done 00:06:54.172 Writing inode tables: 0/64 done 00:06:54.431 Creating journal (8192 blocks): done 00:06:55.369 Writing superblocks and filesystem accounting information: 0/64 done 00:06:55.369 00:06:55.369 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:55.369 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1454073 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:55.627 00:06:55.627 real 0m1.500s 00:06:55.627 user 0m0.025s 00:06:55.627 sys 0m0.067s 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:55.627 ************************************ 00:06:55.627 END TEST filesystem_ext4 00:06:55.627 ************************************ 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:55.627 19:13:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.627 ************************************ 00:06:55.627 START TEST filesystem_btrfs 00:06:55.627 ************************************ 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:55.627 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:56.195 btrfs-progs v6.6.2 00:06:56.195 See https://btrfs.readthedocs.io for more information. 00:06:56.195 00:06:56.195 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
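The make_filesystem call traced just above follows the same pattern for ext4, btrfs and xfs; a simplified sketch of that helper, with the retry loop the real autotest_common.sh version carries left out, is:

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    # ext4 wants -F to overwrite an existing filesystem, btrfs and xfs use -f
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    mkfs."$fstype" $force "$dev_name"
}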
00:06:56.195 NOTE: several default settings have changed in version 5.15, please make sure 00:06:56.195 this does not affect your deployments: 00:06:56.195 - DUP for metadata (-m dup) 00:06:56.195 - enabled no-holes (-O no-holes) 00:06:56.195 - enabled free-space-tree (-R free-space-tree) 00:06:56.195 00:06:56.195 Label: (null) 00:06:56.195 UUID: a55e5073-c30d-41a3-b1eb-b3cd0741d72a 00:06:56.195 Node size: 16384 00:06:56.195 Sector size: 4096 00:06:56.195 Filesystem size: 510.00MiB 00:06:56.195 Block group profiles: 00:06:56.195 Data: single 8.00MiB 00:06:56.195 Metadata: DUP 32.00MiB 00:06:56.195 System: DUP 8.00MiB 00:06:56.195 SSD detected: yes 00:06:56.195 Zoned device: no 00:06:56.195 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:56.195 Runtime features: free-space-tree 00:06:56.195 Checksum: crc32c 00:06:56.195 Number of devices: 1 00:06:56.195 Devices: 00:06:56.195 ID SIZE PATH 00:06:56.195 1 510.00MiB /dev/nvme0n1p1 00:06:56.195 00:06:56.195 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:56.195 19:13:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:56.841 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:56.841 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:56.841 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:56.841 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:56.841 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:56.841 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:56.841 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1454073 00:06:56.841 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:56.841 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:57.100 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:57.100 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:57.100 00:06:57.100 real 0m1.289s 00:06:57.100 user 0m0.021s 00:06:57.100 sys 0m0.131s 00:06:57.100 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.100 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:57.100 ************************************ 00:06:57.100 END TEST filesystem_btrfs 00:06:57.100 ************************************ 00:06:57.100 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:57.100 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:57.101 ************************************ 00:06:57.101 START TEST filesystem_xfs 00:06:57.101 ************************************ 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:57.101 19:13:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:57.101 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:57.101 = sectsz=512 attr=2, projid32bit=1 00:06:57.101 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:57.101 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:57.101 data = bsize=4096 blocks=130560, imaxpct=25 00:06:57.101 = sunit=0 swidth=0 blks 00:06:57.101 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:57.101 log =internal log bsize=4096 blocks=16384, version=2 00:06:57.101 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:57.101 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:58.038 Discarding blocks...Done. 
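After each mkfs, the test body exercises the filesystem over NVMe/TCP in the same way; the commands below restate the traced sequence using the paths and pid variable shown in the log:

mount /dev/nvme0n1p1 /mnt/device       # mount the partition backed by the remote namespace
touch /mnt/device/aaa                  # write a file through the NVMe/TCP path
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                     # target process must still be running
lsblk -l -o NAME | grep -q -w nvme0n1      # namespace is still exposed
lsblk -l -o NAME | grep -q -w nvme0n1p1    # and the partition is still present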
00:06:58.038 19:13:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:58.038 19:13:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1454073 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.574 00:07:00.574 real 0m3.192s 00:07:00.574 user 0m0.024s 00:07:00.574 sys 0m0.071s 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.574 19:13:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:00.574 ************************************ 00:07:00.574 END TEST filesystem_xfs 00:07:00.574 ************************************ 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:00.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:00.574 19:13:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.574 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1454073 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1454073 ']' 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1454073 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1454073 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1454073' 00:07:00.833 killing process with pid 1454073 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1454073 00:07:00.833 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1454073 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:01.093 00:07:01.093 real 0m12.377s 00:07:01.093 user 0m48.553s 00:07:01.093 sys 0m1.240s 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.093 ************************************ 00:07:01.093 END TEST nvmf_filesystem_no_in_capsule 00:07:01.093 ************************************ 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.093 ************************************ 00:07:01.093 START TEST nvmf_filesystem_in_capsule 00:07:01.093 ************************************ 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1456289 00:07:01.093 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1456289 00:07:01.094 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.094 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1456289 ']' 00:07:01.094 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.094 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.094 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.094 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.094 19:13:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.094 [2024-07-15 19:13:11.939515] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:01.094 [2024-07-15 19:13:11.939552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.353 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.353 [2024-07-15 19:13:11.969409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:01.354 [2024-07-15 19:13:11.998674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.354 [2024-07-15 19:13:12.040561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
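Functionally, the only difference between nvmf_filesystem_no_in_capsule and this nvmf_filesystem_in_capsule run is the in-capsule data size handed to the TCP transport; restated with rpc.py against the socket above, the two variants are:

# no in-capsule data (first half of the suite)
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 0
# allow up to 4096 bytes of data inside the command capsule (this half)
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 4096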
00:07:01.354 [2024-07-15 19:13:12.040602] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.354 [2024-07-15 19:13:12.040609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.354 [2024-07-15 19:13:12.040614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.354 [2024-07-15 19:13:12.040619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.354 [2024-07-15 19:13:12.040672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.354 [2024-07-15 19:13:12.040770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.354 [2024-07-15 19:13:12.040854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.354 [2024-07-15 19:13:12.040855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.354 [2024-07-15 19:13:12.176327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.354 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.613 Malloc1 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.613 19:13:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.613 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.614 [2024-07-15 19:13:12.321773] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:01.614 { 00:07:01.614 "name": "Malloc1", 00:07:01.614 "aliases": [ 00:07:01.614 "287a1180-de04-4df5-a71e-358d3c6a2d77" 00:07:01.614 ], 00:07:01.614 "product_name": "Malloc disk", 00:07:01.614 "block_size": 512, 00:07:01.614 "num_blocks": 1048576, 00:07:01.614 "uuid": "287a1180-de04-4df5-a71e-358d3c6a2d77", 00:07:01.614 "assigned_rate_limits": { 00:07:01.614 "rw_ios_per_sec": 0, 00:07:01.614 "rw_mbytes_per_sec": 0, 00:07:01.614 "r_mbytes_per_sec": 0, 00:07:01.614 "w_mbytes_per_sec": 0 00:07:01.614 }, 00:07:01.614 "claimed": true, 00:07:01.614 "claim_type": "exclusive_write", 00:07:01.614 "zoned": false, 00:07:01.614 "supported_io_types": { 00:07:01.614 "read": true, 00:07:01.614 "write": true, 00:07:01.614 "unmap": true, 00:07:01.614 "flush": true, 00:07:01.614 "reset": true, 00:07:01.614 "nvme_admin": false, 00:07:01.614 "nvme_io": false, 00:07:01.614 "nvme_io_md": false, 00:07:01.614 "write_zeroes": true, 
00:07:01.614 "zcopy": true, 00:07:01.614 "get_zone_info": false, 00:07:01.614 "zone_management": false, 00:07:01.614 "zone_append": false, 00:07:01.614 "compare": false, 00:07:01.614 "compare_and_write": false, 00:07:01.614 "abort": true, 00:07:01.614 "seek_hole": false, 00:07:01.614 "seek_data": false, 00:07:01.614 "copy": true, 00:07:01.614 "nvme_iov_md": false 00:07:01.614 }, 00:07:01.614 "memory_domains": [ 00:07:01.614 { 00:07:01.614 "dma_device_id": "system", 00:07:01.614 "dma_device_type": 1 00:07:01.614 }, 00:07:01.614 { 00:07:01.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.614 "dma_device_type": 2 00:07:01.614 } 00:07:01.614 ], 00:07:01.614 "driver_specific": {} 00:07:01.614 } 00:07:01.614 ]' 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:01.614 19:13:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:02.993 19:13:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:02.993 19:13:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:02.993 19:13:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:02.993 19:13:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:02.993 19:13:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:04.896 19:13:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:04.896 19:13:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:05.465 19:13:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:05.465 19:13:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.424 ************************************ 00:07:06.424 START TEST filesystem_in_capsule_ext4 00:07:06.424 ************************************ 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:06.424 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:06.424 19:13:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:06.425 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:06.425 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:06.425 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:06.425 mke2fs 1.46.5 (30-Dec-2021) 00:07:06.684 Discarding device blocks: 0/522240 done 00:07:06.684 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:06.684 Filesystem UUID: 6b6aa645-38cd-462a-9b7e-40b2f111b3a0 00:07:06.684 Superblock backups stored on blocks: 00:07:06.684 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:06.684 00:07:06.684 Allocating group tables: 0/64 done 00:07:06.684 Writing inode tables: 0/64 done 00:07:06.684 Creating journal (8192 blocks): done 00:07:06.684 Writing superblocks and filesystem accounting information: 0/64 done 00:07:06.684 00:07:06.684 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:06.684 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:06.943 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:06.943 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:06.943 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:06.943 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:06.943 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:06.943 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1456289 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.202 00:07:07.202 real 0m0.602s 00:07:07.202 user 0m0.018s 00:07:07.202 sys 0m0.071s 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:07.202 
************************************ 00:07:07.202 END TEST filesystem_in_capsule_ext4 00:07:07.202 ************************************ 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.202 ************************************ 00:07:07.202 START TEST filesystem_in_capsule_btrfs 00:07:07.202 ************************************ 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:07.202 19:13:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:07.462 btrfs-progs v6.6.2 00:07:07.462 See https://btrfs.readthedocs.io for more information. 00:07:07.462 00:07:07.462 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
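The START TEST / END TEST banners and the real/user/sys lines that bracket each sub-test come from the run_test wrapper in autotest_common.sh; a stripped-down sketch of that pattern (the real helper also toggles xtrace and records per-test timing for the final report) is:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # e.g. nvmf_filesystem_create btrfs nvme0n1
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}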
00:07:07.462 NOTE: several default settings have changed in version 5.15, please make sure 00:07:07.462 this does not affect your deployments: 00:07:07.462 - DUP for metadata (-m dup) 00:07:07.462 - enabled no-holes (-O no-holes) 00:07:07.462 - enabled free-space-tree (-R free-space-tree) 00:07:07.462 00:07:07.462 Label: (null) 00:07:07.462 UUID: 362e2b1c-c65b-4d9f-b118-2d35ee83752d 00:07:07.462 Node size: 16384 00:07:07.462 Sector size: 4096 00:07:07.462 Filesystem size: 510.00MiB 00:07:07.462 Block group profiles: 00:07:07.462 Data: single 8.00MiB 00:07:07.462 Metadata: DUP 32.00MiB 00:07:07.462 System: DUP 8.00MiB 00:07:07.462 SSD detected: yes 00:07:07.462 Zoned device: no 00:07:07.462 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:07.462 Runtime features: free-space-tree 00:07:07.462 Checksum: crc32c 00:07:07.462 Number of devices: 1 00:07:07.462 Devices: 00:07:07.462 ID SIZE PATH 00:07:07.462 1 510.00MiB /dev/nvme0n1p1 00:07:07.462 00:07:07.462 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:07.462 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1456289 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.722 00:07:07.722 real 0m0.516s 00:07:07.722 user 0m0.033s 00:07:07.722 sys 0m0.114s 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 ************************************ 00:07:07.722 END TEST filesystem_in_capsule_btrfs 00:07:07.722 ************************************ 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.722 ************************************ 00:07:07.722 START TEST filesystem_in_capsule_xfs 00:07:07.722 ************************************ 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:07.722 19:13:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:07.981 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:07.981 = sectsz=512 attr=2, projid32bit=1 00:07:07.981 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:07.981 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:07.981 data = bsize=4096 blocks=130560, imaxpct=25 00:07:07.981 = sunit=0 swidth=0 blks 00:07:07.981 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:07.981 log =internal log bsize=4096 blocks=16384, version=2 00:07:07.981 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:07.981 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:08.550 Discarding blocks...Done. 
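For reference, the make_filesystem helper traced in these ext4/btrfs/xfs sections reduces to choosing the right force flag for the filesystem type and invoking the matching mkfs tool. A minimal sketch of that logic (simplified; the real helper in autotest_common.sh may carry retry handling not shown here):

    # pick the force flag per filesystem type, then format the device
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F            # mkfs.ext4 forces with -F
        else
            force=-f            # mkfs.btrfs / mkfs.xfs force with -f
        fi
        mkfs."$fstype" $force "$dev_name"   # e.g. mkfs.xfs -f /dev/nvme0n1p1
    }

The surrounding trace then mounts the result at /mnt/device, touches and removes a file, syncs, and unmounts, confirming the NVMe-oF namespace behaves like an ordinary block device.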
00:07:08.550 19:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:08.550 19:13:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1456289 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:10.454 00:07:10.454 real 0m2.695s 00:07:10.454 user 0m0.018s 00:07:10.454 sys 0m0.078s 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:10.454 ************************************ 00:07:10.454 END TEST filesystem_in_capsule_xfs 00:07:10.454 ************************************ 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:10.454 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:11.021 19:13:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1456289 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1456289 ']' 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1456289 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1456289 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1456289' 00:07:11.021 killing process with pid 1456289 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1456289 00:07:11.021 19:13:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1456289 00:07:11.280 19:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:11.280 00:07:11.280 real 0m10.248s 00:07:11.280 user 0m40.138s 00:07:11.280 sys 0m1.159s 00:07:11.280 19:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.280 19:13:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.280 ************************************ 00:07:11.280 END TEST nvmf_filesystem_in_capsule 00:07:11.280 ************************************ 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.538 rmmod nvme_tcp 00:07:11.538 rmmod nvme_fabrics 00:07:11.538 rmmod nvme_keyring 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:11.538 19:13:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.539 19:13:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.539 19:13:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.070 19:13:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:14.071 00:07:14.071 real 0m30.803s 00:07:14.071 user 1m30.431s 00:07:14.071 sys 0m6.831s 00:07:14.071 19:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.071 19:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:14.071 ************************************ 00:07:14.071 END TEST nvmf_filesystem 00:07:14.071 ************************************ 00:07:14.071 19:13:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:14.071 19:13:24 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:14.071 19:13:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:14.071 19:13:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.071 19:13:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.071 ************************************ 00:07:14.071 START TEST nvmf_target_discovery 00:07:14.071 ************************************ 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:14.071 * Looking for test storage... 
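The nvmf_target_discovery test that starts here drives the SPDK target over JSON-RPC and then verifies the advertised records with the host-side nvme CLI. Condensed to its essentials, and using rpc.py directly as a stand-in for the harness's rpc_cmd wrapper, the flow traced below is roughly:

    # hedged sketch of the discovery flow (addresses, ports and NQNs as they appear in this log)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        rpc.py bdev_null_create Null$i 102400 512
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420   # expect 6 records: discovery, cnode1-4, referral

Everything is torn down again afterwards with nvmf_delete_subsystem, bdev_null_delete and nvmf_discovery_remove_referral before nvmftestfini, as the trace at the end of this test shows.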
00:07:14.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.071 19:13:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.390 19:13:29 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:19.390 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:19.390 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:19.390 Found net devices under 0000:86:00.0: cvl_0_0 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.390 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:19.391 Found net devices under 0000:86:00.1: cvl_0_1 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:19.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:07:19.391 00:07:19.391 --- 10.0.0.2 ping statistics --- 00:07:19.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.391 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:07:19.391 00:07:19.391 --- 10.0.0.1 ping statistics --- 00:07:19.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.391 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1461523 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1461523 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1461523 ']' 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:19.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.391 19:13:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 [2024-07-15 19:13:29.861835] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:19.391 [2024-07-15 19:13:29.861882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.391 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.391 [2024-07-15 19:13:29.892459] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:19.391 [2024-07-15 19:13:29.920076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.391 [2024-07-15 19:13:29.962576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.391 [2024-07-15 19:13:29.962614] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.391 [2024-07-15 19:13:29.962621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.391 [2024-07-15 19:13:29.962627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.391 [2024-07-15 19:13:29.962632] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.391 [2024-07-15 19:13:29.962675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.391 [2024-07-15 19:13:29.962764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.391 [2024-07-15 19:13:29.962863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.391 [2024-07-15 19:13:29.962864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 [2024-07-15 19:13:30.111408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:19.391 19:13:30 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 Null1 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 [2024-07-15 19:13:30.156942] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 Null2 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.391 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.392 Null3 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.392 Null4 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.392 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.652 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:19.652 00:07:19.652 Discovery Log Number of Records 6, Generation counter 6 00:07:19.652 =====Discovery Log Entry 0====== 00:07:19.652 trtype: tcp 00:07:19.652 adrfam: ipv4 00:07:19.652 subtype: current discovery subsystem 00:07:19.652 treq: not required 00:07:19.652 portid: 0 00:07:19.652 trsvcid: 4420 00:07:19.652 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:19.652 traddr: 10.0.0.2 00:07:19.652 eflags: explicit discovery connections, duplicate discovery information 00:07:19.652 sectype: none 00:07:19.652 =====Discovery Log Entry 1====== 00:07:19.652 trtype: tcp 00:07:19.652 adrfam: ipv4 00:07:19.652 subtype: nvme subsystem 00:07:19.652 treq: not required 00:07:19.652 portid: 0 00:07:19.652 trsvcid: 4420 00:07:19.652 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:19.652 traddr: 10.0.0.2 00:07:19.652 eflags: none 00:07:19.652 sectype: none 00:07:19.652 =====Discovery Log Entry 2====== 00:07:19.652 trtype: tcp 00:07:19.652 adrfam: ipv4 00:07:19.652 subtype: nvme subsystem 00:07:19.652 treq: not required 00:07:19.652 portid: 0 00:07:19.652 trsvcid: 4420 00:07:19.652 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:19.652 traddr: 10.0.0.2 00:07:19.652 eflags: none 00:07:19.652 sectype: none 00:07:19.652 =====Discovery Log Entry 3====== 00:07:19.652 trtype: tcp 00:07:19.652 adrfam: ipv4 00:07:19.652 subtype: nvme subsystem 00:07:19.652 treq: not required 00:07:19.652 portid: 0 00:07:19.652 trsvcid: 4420 00:07:19.652 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:19.652 traddr: 10.0.0.2 
00:07:19.652 eflags: none 00:07:19.652 sectype: none 00:07:19.652 =====Discovery Log Entry 4====== 00:07:19.652 trtype: tcp 00:07:19.652 adrfam: ipv4 00:07:19.652 subtype: nvme subsystem 00:07:19.652 treq: not required 00:07:19.652 portid: 0 00:07:19.652 trsvcid: 4420 00:07:19.652 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:19.652 traddr: 10.0.0.2 00:07:19.653 eflags: none 00:07:19.653 sectype: none 00:07:19.653 =====Discovery Log Entry 5====== 00:07:19.653 trtype: tcp 00:07:19.653 adrfam: ipv4 00:07:19.653 subtype: discovery subsystem referral 00:07:19.653 treq: not required 00:07:19.653 portid: 0 00:07:19.653 trsvcid: 4430 00:07:19.653 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:19.653 traddr: 10.0.0.2 00:07:19.653 eflags: none 00:07:19.653 sectype: none 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:19.653 Perform nvmf subsystem discovery via RPC 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.653 [ 00:07:19.653 { 00:07:19.653 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:19.653 "subtype": "Discovery", 00:07:19.653 "listen_addresses": [ 00:07:19.653 { 00:07:19.653 "trtype": "TCP", 00:07:19.653 "adrfam": "IPv4", 00:07:19.653 "traddr": "10.0.0.2", 00:07:19.653 "trsvcid": "4420" 00:07:19.653 } 00:07:19.653 ], 00:07:19.653 "allow_any_host": true, 00:07:19.653 "hosts": [] 00:07:19.653 }, 00:07:19.653 { 00:07:19.653 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:19.653 "subtype": "NVMe", 00:07:19.653 "listen_addresses": [ 00:07:19.653 { 00:07:19.653 "trtype": "TCP", 00:07:19.653 "adrfam": "IPv4", 00:07:19.653 "traddr": "10.0.0.2", 00:07:19.653 "trsvcid": "4420" 00:07:19.653 } 00:07:19.653 ], 00:07:19.653 "allow_any_host": true, 00:07:19.653 "hosts": [], 00:07:19.653 "serial_number": "SPDK00000000000001", 00:07:19.653 "model_number": "SPDK bdev Controller", 00:07:19.653 "max_namespaces": 32, 00:07:19.653 "min_cntlid": 1, 00:07:19.653 "max_cntlid": 65519, 00:07:19.653 "namespaces": [ 00:07:19.653 { 00:07:19.653 "nsid": 1, 00:07:19.653 "bdev_name": "Null1", 00:07:19.653 "name": "Null1", 00:07:19.653 "nguid": "DD47ED5F90444B29BA93A87C8324B2E2", 00:07:19.653 "uuid": "dd47ed5f-9044-4b29-ba93-a87c8324b2e2" 00:07:19.653 } 00:07:19.653 ] 00:07:19.653 }, 00:07:19.653 { 00:07:19.653 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:19.653 "subtype": "NVMe", 00:07:19.653 "listen_addresses": [ 00:07:19.653 { 00:07:19.653 "trtype": "TCP", 00:07:19.653 "adrfam": "IPv4", 00:07:19.653 "traddr": "10.0.0.2", 00:07:19.653 "trsvcid": "4420" 00:07:19.653 } 00:07:19.653 ], 00:07:19.653 "allow_any_host": true, 00:07:19.653 "hosts": [], 00:07:19.653 "serial_number": "SPDK00000000000002", 00:07:19.653 "model_number": "SPDK bdev Controller", 00:07:19.653 "max_namespaces": 32, 00:07:19.653 "min_cntlid": 1, 00:07:19.653 "max_cntlid": 65519, 00:07:19.653 "namespaces": [ 00:07:19.653 { 00:07:19.653 "nsid": 1, 00:07:19.653 "bdev_name": "Null2", 00:07:19.653 "name": "Null2", 00:07:19.653 "nguid": "28DC31A24BFA48F7B547C58A27CD6288", 00:07:19.653 "uuid": "28dc31a2-4bfa-48f7-b547-c58a27cd6288" 00:07:19.653 } 00:07:19.653 ] 00:07:19.653 }, 00:07:19.653 { 00:07:19.653 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:19.653 "subtype": "NVMe", 00:07:19.653 "listen_addresses": [ 
00:07:19.653 { 00:07:19.653 "trtype": "TCP", 00:07:19.653 "adrfam": "IPv4", 00:07:19.653 "traddr": "10.0.0.2", 00:07:19.653 "trsvcid": "4420" 00:07:19.653 } 00:07:19.653 ], 00:07:19.653 "allow_any_host": true, 00:07:19.653 "hosts": [], 00:07:19.653 "serial_number": "SPDK00000000000003", 00:07:19.653 "model_number": "SPDK bdev Controller", 00:07:19.653 "max_namespaces": 32, 00:07:19.653 "min_cntlid": 1, 00:07:19.653 "max_cntlid": 65519, 00:07:19.653 "namespaces": [ 00:07:19.653 { 00:07:19.653 "nsid": 1, 00:07:19.653 "bdev_name": "Null3", 00:07:19.653 "name": "Null3", 00:07:19.653 "nguid": "A234BCBD45F44E349E8B1E2D44565F0C", 00:07:19.653 "uuid": "a234bcbd-45f4-4e34-9e8b-1e2d44565f0c" 00:07:19.653 } 00:07:19.653 ] 00:07:19.653 }, 00:07:19.653 { 00:07:19.653 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:19.653 "subtype": "NVMe", 00:07:19.653 "listen_addresses": [ 00:07:19.653 { 00:07:19.653 "trtype": "TCP", 00:07:19.653 "adrfam": "IPv4", 00:07:19.653 "traddr": "10.0.0.2", 00:07:19.653 "trsvcid": "4420" 00:07:19.653 } 00:07:19.653 ], 00:07:19.653 "allow_any_host": true, 00:07:19.653 "hosts": [], 00:07:19.653 "serial_number": "SPDK00000000000004", 00:07:19.653 "model_number": "SPDK bdev Controller", 00:07:19.653 "max_namespaces": 32, 00:07:19.653 "min_cntlid": 1, 00:07:19.653 "max_cntlid": 65519, 00:07:19.653 "namespaces": [ 00:07:19.653 { 00:07:19.653 "nsid": 1, 00:07:19.653 "bdev_name": "Null4", 00:07:19.653 "name": "Null4", 00:07:19.653 "nguid": "D65C608615AE48498FC3113E293DA7F5", 00:07:19.653 "uuid": "d65c6086-15ae-4849-8fc3-113e293da7f5" 00:07:19.653 } 00:07:19.653 ] 00:07:19.653 } 00:07:19.653 ] 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.653 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.654 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:19.654 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:19.654 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:19.913 
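The teardown traced above reduces to a short sequence of SPDK RPCs: each of the four subsystems is removed before its backing null bdev, the 4430 discovery referral is dropped, and a final bdev listing confirms nothing is left behind. A minimal stand-alone sketch of the same sequence, assuming the in-tree scripts/rpc.py client and the default /var/tmp/spdk.sock RPC socket (the test itself goes through the rpc_cmd wrapper):

  # teardown sketch; mirrors the rpc_cmd calls in target/discovery.sh
  for i in $(seq 1 4); do
      ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # subsystem first
      ./scripts/rpc.py bdev_null_delete "Null${i}"                             # then its null bdev
  done
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430   # drop the referral added during setup
  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'                           # expected to print nothing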
19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:19.913 rmmod nvme_tcp 00:07:19.913 rmmod nvme_fabrics 00:07:19.913 rmmod nvme_keyring 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1461523 ']' 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1461523 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1461523 ']' 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1461523 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1461523 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1461523' 00:07:19.913 killing process with pid 1461523 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1461523 00:07:19.913 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1461523 00:07:20.175 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:20.175 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:20.175 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:20.175 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:20.175 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:20.175 19:13:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.175 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.175 19:13:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.081 19:13:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.081 00:07:22.081 real 0m8.519s 00:07:22.081 user 0m4.953s 00:07:22.081 sys 0m4.326s 00:07:22.081 19:13:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.081 19:13:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:07:22.081 ************************************ 00:07:22.081 END TEST nvmf_target_discovery 00:07:22.081 ************************************ 00:07:22.081 19:13:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:22.081 19:13:32 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:22.081 19:13:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.081 19:13:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.081 19:13:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.341 ************************************ 00:07:22.341 START TEST nvmf_referrals 00:07:22.341 ************************************ 00:07:22.341 19:13:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:22.341 * Looking for test storage... 00:07:22.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # 
NVMF_REFERRAL_IP_3=127.0.0.4 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.341 19:13:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.631 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:27.632 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:27.632 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.632 
19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:27.632 Found net devices under 0000:86:00.0: cvl_0_0 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:27.632 Found net devices under 0000:86:00.1: cvl_0_1 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.632 19:13:37 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:27.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:07:27.632 00:07:27.632 --- 10.0.0.2 ping statistics --- 00:07:27.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.632 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:07:27.632 19:13:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:27.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:07:27.632 00:07:27.632 --- 10.0.0.1 ping statistics --- 00:07:27.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.632 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1465061 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1465061 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1465061 ']' 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
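The network bring-up traced above follows the usual nvmf_tcp_init pattern: one port of the NIC pair (cvl_0_0) is moved into a private network namespace for the target, both sides get addresses on 10.0.0.0/24, the NVMe/TCP port is opened in iptables, and reachability is confirmed with ping before the target is launched inside that namespace. A condensed sketch of those steps, using the interface names reported for this board (the address flushes and error handling in nvmf/common.sh are omitted):

  # TCP test-bed plumbing; interface names come from this run's hardware
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                               # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # allow NVMe/TCP in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # both directions reachable

With the namespace in place, nvmfappstart runs nvmf_tgt through 'ip netns exec cvl_0_0_ns_spdk' and waits for its UNIX-domain RPC socket, which produces the "Waiting for process to start up..." message that follows.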
00:07:27.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 [2024-07-15 19:13:38.091642] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:27.632 [2024-07-15 19:13:38.091685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.632 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.632 [2024-07-15 19:13:38.121704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:27.632 [2024-07-15 19:13:38.148818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.632 [2024-07-15 19:13:38.190759] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.632 [2024-07-15 19:13:38.190798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.632 [2024-07-15 19:13:38.190809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.632 [2024-07-15 19:13:38.190814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.632 [2024-07-15 19:13:38.190819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.632 [2024-07-15 19:13:38.190883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.632 [2024-07-15 19:13:38.190978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.632 [2024-07-15 19:13:38.191062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.632 [2024-07-15 19:13:38.191063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 [2024-07-15 19:13:38.334309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.632 
19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 [2024-07-15 19:13:38.347725] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:27.632 19:13:38 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:27.632 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.890 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:28.148 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 
8009 -o json 00:07:28.149 19:13:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:28.407 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- 
# echo 127.0.0.2 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.665 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:28.924 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.182 rmmod nvme_tcp 00:07:29.182 rmmod nvme_fabrics 00:07:29.182 rmmod nvme_keyring 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1465061 ']' 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1465061 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1465061 ']' 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1465061 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465061 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465061' 00:07:29.182 killing process with pid 1465061 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1465061 00:07:29.182 19:13:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1465061 00:07:29.441 19:13:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.441 19:13:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:29.441 19:13:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:29.441 19:13:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.441 19:13:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.441 19:13:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.441 19:13:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.441 19:13:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.970 19:13:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:07:31.970 00:07:31.971 real 0m9.251s 00:07:31.971 user 0m9.694s 00:07:31.971 sys 0m4.412s 00:07:31.971 19:13:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.971 19:13:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:31.971 ************************************ 00:07:31.971 END TEST nvmf_referrals 00:07:31.971 ************************************ 00:07:31.971 19:13:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:31.971 19:13:42 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:31.971 19:13:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:31.971 19:13:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.971 19:13:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:31.971 ************************************ 00:07:31.971 START TEST nvmf_connect_disconnect 00:07:31.971 ************************************ 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:31.971 * Looking for test storage... 00:07:31.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.971 19:13:42 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:31.971 19:13:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:37.242 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:37.242 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:37.242 Found net devices under 0000:86:00.0: cvl_0_0 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:37.242 Found net devices under 0000:86:00.1: cvl_0_1 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.242 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:07:37.243 00:07:37.243 --- 10.0.0.2 ping statistics --- 00:07:37.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.243 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:07:37.243 00:07:37.243 --- 10.0.0.1 ping statistics --- 00:07:37.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.243 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1468902 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1468902 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1468902 ']' 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 [2024-07-15 19:13:47.663628] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:07:37.243 [2024-07-15 19:13:47.663677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.243 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.243 [2024-07-15 19:13:47.694380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:37.243 [2024-07-15 19:13:47.721795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.243 [2024-07-15 19:13:47.765966] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.243 [2024-07-15 19:13:47.766003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.243 [2024-07-15 19:13:47.766010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.243 [2024-07-15 19:13:47.766016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.243 [2024-07-15 19:13:47.766021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.243 [2024-07-15 19:13:47.766070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.243 [2024-07-15 19:13:47.766165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.243 [2024-07-15 19:13:47.766254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.243 [2024-07-15 19:13:47.766255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 [2024-07-15 19:13:47.910327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.243 19:13:47 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 [2024-07-15 19:13:47.962132] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:37.243 19:13:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:39.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.626 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:08:43.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.077 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.484 rmmod nvme_tcp 00:11:27.484 rmmod nvme_fabrics 00:11:27.484 rmmod nvme_keyring 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1468902 ']' 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1468902 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1468902 ']' 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1468902 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:11:27.484 19:17:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1468902 00:11:27.484 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:27.484 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:27.484 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1468902' 00:11:27.484 killing process with pid 1468902 00:11:27.484 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1468902 00:11:27.484 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1468902 00:11:27.484 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.485 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.485 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.485 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.485 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.485 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.485 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.485 19:17:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.020 19:17:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:30.020 00:11:30.020 real 3m58.007s 00:11:30.020 user 15m14.300s 00:11:30.020 sys 0m20.054s 00:11:30.020 19:17:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.020 19:17:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.020 ************************************ 00:11:30.020 END TEST nvmf_connect_disconnect 00:11:30.020 ************************************ 00:11:30.020 19:17:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:30.020 19:17:40 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:30.020 19:17:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:30.020 19:17:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.020 19:17:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:30.020 ************************************ 00:11:30.020 START TEST nvmf_multitarget 00:11:30.020 ************************************ 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:30.021 * Looking for test storage... 
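For reference, the connect/disconnect run that just finished above provisioned the target and then looped 100 times; a condensed sketch of that sequence, with rpc.py standing in for the harness's rpc_cmd wrapper and the initiator flags assumed rather than copied from connect_disconnect.sh:

    # target side, against the nvmf_tgt started in the test namespace
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                                   # returns Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side, one of the 100 iterations (the script sets NVME_CONNECT='nvme connect -i 8')
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"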
00:11:30.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
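The NVMF_APP arguments assembled above (-i "$NVMF_APP_SHM_ID" -e 0xFFFF) feed into the target launch that appears further down in this trace; a sketch of the effective invocation as recorded there:

    # nvmfappstart -m 0xF launches the target inside the test namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # waitforlisten then blocks until the app accepts RPCs on /var/tmp/spdk.sock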
00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:30.021 19:17:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:35.298 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:35.298 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:35.298 Found net devices under 0000:86:00.0: cvl_0_0 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
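The device-matching loop above resolves each selected PCI function to its kernel net device through sysfs; the equivalent manual check for the first E810 port found in this run:

    # list the netdev(s) bound to PCI function 0000:86:00.0
    ls /sys/bus/pci/devices/0000:86:00.0/net/
    # -> cvl_0_0  (matches "Found net devices under 0000:86:00.0: cvl_0_0" above)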
00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:35.298 Found net devices under 0000:86:00.1: cvl_0_1 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.298 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:35.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:11:35.299 00:11:35.299 --- 10.0.0.2 ping statistics --- 00:11:35.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.299 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:11:35.299 00:11:35.299 --- 10.0.0.1 ping statistics --- 00:11:35.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.299 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1512252 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1512252 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1512252 ']' 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.299 [2024-07-15 19:17:45.451799] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
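nvmf_tcp_init above splits the two ice ports into a target namespace and a host-side initiator port, then verifies reachability in both directions; the topology commands, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator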
00:11:35.299 [2024-07-15 19:17:45.451849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.299 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.299 [2024-07-15 19:17:45.482467] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:35.299 [2024-07-15 19:17:45.509388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.299 [2024-07-15 19:17:45.553451] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.299 [2024-07-15 19:17:45.553502] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.299 [2024-07-15 19:17:45.553509] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.299 [2024-07-15 19:17:45.553515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.299 [2024-07-15 19:17:45.553520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.299 [2024-07-15 19:17:45.553620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.299 [2024-07-15 19:17:45.553740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.299 [2024-07-15 19:17:45.553801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.299 [2024-07-15 19:17:45.553802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:35.299 "nvmf_tgt_1" 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:35.299 "nvmf_tgt_2" 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_get_targets 00:11:35.299 19:17:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:35.299 19:17:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:35.299 19:17:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:35.559 true 00:11:35.559 19:17:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:35.559 true 00:11:35.559 19:17:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:35.559 19:17:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:35.819 rmmod nvme_tcp 00:11:35.819 rmmod nvme_fabrics 00:11:35.819 rmmod nvme_keyring 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1512252 ']' 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1512252 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1512252 ']' 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1512252 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1512252 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1512252' 00:11:35.819 killing process with pid 1512252 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1512252 00:11:35.819 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1512252 00:11:36.079 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.079 19:17:46 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.079 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.079 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.079 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.079 19:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.079 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.079 19:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.987 19:17:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:37.987 00:11:37.987 real 0m8.410s 00:11:37.987 user 0m6.427s 00:11:37.987 sys 0m4.124s 00:11:37.987 19:17:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.987 19:17:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:37.987 ************************************ 00:11:37.987 END TEST nvmf_multitarget 00:11:37.987 ************************************ 00:11:37.987 19:17:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:37.987 19:17:48 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:37.987 19:17:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:37.987 19:17:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.987 19:17:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:37.987 ************************************ 00:11:37.987 START TEST nvmf_rpc 00:11:37.987 ************************************ 00:11:37.987 19:17:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:38.247 * Looking for test storage... 
00:11:38.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.247 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:38.248 19:17:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
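The hostnqn/hostid and port variables exported by common.sh above are what each later nvme connect in this test expands to; a condensed sketch with the values from this run (the host NQN/ID come from nvme gen-hostnqn, and the target address is assigned further down by nvmf_tcp_init):

  NVMF_PORT=4420
  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
  NVMF_FIRST_TARGET_IP=10.0.0.2        # set by nvmf_tcp_init below
  # rpc.sh connects to the subsystem it creates (nqn.2016-06.io.spdk:cnode1),
  # not the default NVME_SUBNQN (nqn.2016-06.io.spdk:testnqn):
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT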
00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.527 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:43.528 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:43.528 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:43.528 Found net devices under 0000:86:00.0: cvl_0_0 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:43.528 Found net devices under 0000:86:00.1: cvl_0_1 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:43.528 19:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:43.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:11:43.528 00:11:43.528 --- 10.0.0.2 ping statistics --- 00:11:43.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.528 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:11:43.528 00:11:43.528 --- 10.0.0.1 ping statistics --- 00:11:43.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.528 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1515844 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1515844 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1515844 ']' 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.528 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.528 [2024-07-15 19:17:54.318680] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:11:43.528 [2024-07-15 19:17:54.318724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.528 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.528 [2024-07-15 19:17:54.348888] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:43.528 [2024-07-15 19:17:54.377294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.787 [2024-07-15 19:17:54.420207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:43.788 [2024-07-15 19:17:54.420250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.788 [2024-07-15 19:17:54.420258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.788 [2024-07-15 19:17:54.420264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.788 [2024-07-15 19:17:54.420270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.788 [2024-07-15 19:17:54.420311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.788 [2024-07-15 19:17:54.420408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.788 [2024-07-15 19:17:54.420493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.788 [2024-07-15 19:17:54.420494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:43.788 "tick_rate": 2300000000, 00:11:43.788 "poll_groups": [ 00:11:43.788 { 00:11:43.788 "name": "nvmf_tgt_poll_group_000", 00:11:43.788 "admin_qpairs": 0, 00:11:43.788 "io_qpairs": 0, 00:11:43.788 "current_admin_qpairs": 0, 00:11:43.788 "current_io_qpairs": 0, 00:11:43.788 "pending_bdev_io": 0, 00:11:43.788 "completed_nvme_io": 0, 00:11:43.788 "transports": [] 00:11:43.788 }, 00:11:43.788 { 00:11:43.788 "name": "nvmf_tgt_poll_group_001", 00:11:43.788 "admin_qpairs": 0, 00:11:43.788 "io_qpairs": 0, 00:11:43.788 "current_admin_qpairs": 0, 00:11:43.788 "current_io_qpairs": 0, 00:11:43.788 "pending_bdev_io": 0, 00:11:43.788 "completed_nvme_io": 0, 00:11:43.788 "transports": [] 00:11:43.788 }, 00:11:43.788 { 00:11:43.788 "name": "nvmf_tgt_poll_group_002", 00:11:43.788 "admin_qpairs": 0, 00:11:43.788 "io_qpairs": 0, 00:11:43.788 "current_admin_qpairs": 0, 00:11:43.788 "current_io_qpairs": 0, 00:11:43.788 "pending_bdev_io": 0, 00:11:43.788 "completed_nvme_io": 0, 00:11:43.788 "transports": [] 00:11:43.788 }, 00:11:43.788 { 00:11:43.788 "name": "nvmf_tgt_poll_group_003", 00:11:43.788 "admin_qpairs": 0, 00:11:43.788 "io_qpairs": 0, 00:11:43.788 "current_admin_qpairs": 0, 00:11:43.788 "current_io_qpairs": 0, 00:11:43.788 "pending_bdev_io": 0, 00:11:43.788 "completed_nvme_io": 0, 00:11:43.788 "transports": [] 00:11:43.788 } 00:11:43.788 ] 00:11:43.788 }' 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 
'filter=.poll_groups[].name' 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:43.788 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.047 [2024-07-15 19:17:54.668483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:44.047 "tick_rate": 2300000000, 00:11:44.047 "poll_groups": [ 00:11:44.047 { 00:11:44.047 "name": "nvmf_tgt_poll_group_000", 00:11:44.047 "admin_qpairs": 0, 00:11:44.047 "io_qpairs": 0, 00:11:44.047 "current_admin_qpairs": 0, 00:11:44.047 "current_io_qpairs": 0, 00:11:44.047 "pending_bdev_io": 0, 00:11:44.047 "completed_nvme_io": 0, 00:11:44.047 "transports": [ 00:11:44.047 { 00:11:44.047 "trtype": "TCP" 00:11:44.047 } 00:11:44.047 ] 00:11:44.047 }, 00:11:44.047 { 00:11:44.047 "name": "nvmf_tgt_poll_group_001", 00:11:44.047 "admin_qpairs": 0, 00:11:44.047 "io_qpairs": 0, 00:11:44.047 "current_admin_qpairs": 0, 00:11:44.047 "current_io_qpairs": 0, 00:11:44.047 "pending_bdev_io": 0, 00:11:44.047 "completed_nvme_io": 0, 00:11:44.047 "transports": [ 00:11:44.047 { 00:11:44.047 "trtype": "TCP" 00:11:44.047 } 00:11:44.047 ] 00:11:44.047 }, 00:11:44.047 { 00:11:44.047 "name": "nvmf_tgt_poll_group_002", 00:11:44.047 "admin_qpairs": 0, 00:11:44.047 "io_qpairs": 0, 00:11:44.047 "current_admin_qpairs": 0, 00:11:44.047 "current_io_qpairs": 0, 00:11:44.047 "pending_bdev_io": 0, 00:11:44.047 "completed_nvme_io": 0, 00:11:44.047 "transports": [ 00:11:44.047 { 00:11:44.047 "trtype": "TCP" 00:11:44.047 } 00:11:44.047 ] 00:11:44.047 }, 00:11:44.047 { 00:11:44.047 "name": "nvmf_tgt_poll_group_003", 00:11:44.047 "admin_qpairs": 0, 00:11:44.047 "io_qpairs": 0, 00:11:44.047 "current_admin_qpairs": 0, 00:11:44.047 "current_io_qpairs": 0, 00:11:44.047 "pending_bdev_io": 0, 00:11:44.047 "completed_nvme_io": 0, 00:11:44.047 "transports": [ 00:11:44.047 { 00:11:44.047 "trtype": "TCP" 00:11:44.047 } 00:11:44.047 ] 00:11:44.047 } 00:11:44.047 ] 00:11:44.047 }' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.047 Malloc1 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.047 [2024-07-15 19:17:54.840496] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:44.047 [2024-07-15 19:17:54.864986] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:44.047 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:44.047 could not add new controller: failed to write to nvme-fabrics device 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:44.047 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:44.048 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:44.048 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:44.048 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.048 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.048 19:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.305 19:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.242 19:17:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.242 19:17:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:45.242 19:17:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.242 19:17:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:45.242 19:17:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:47.154 19:17:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:47.154 19:17:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:47.154 19:17:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.154 19:17:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
nvme_devices=1 00:11:47.154 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.154 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:47.154 19:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.412 [2024-07-15 19:17:58.160937] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:47.412 Failed to write to 
/dev/nvme-fabrics: Input/output error 00:11:47.412 could not add new controller: failed to write to nvme-fabrics device 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.412 19:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.788 19:17:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.788 19:17:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.788 19:17:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.788 19:17:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:48.788 19:17:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.694 19:18:01 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 [2024-07-15 19:18:01.503199] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.694 19:18:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.071 19:18:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.071 19:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:52.071 19:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.071 19:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:52.071 19:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.971 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.230 [2024-07-15 19:18:04.833239] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.230 19:18:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.167 19:18:05 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.167 19:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:55.167 19:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.167 19:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:55.167 19:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:57.699 19:18:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:57.699 19:18:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:57.699 19:18:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.699 19:18:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:57.699 19:18:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.699 19:18:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:57.699 19:18:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.699 [2024-07-15 19:18:08.103729] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.699 19:18:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.700 19:18:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.636 19:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:58.636 19:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:58.636 19:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.636 19:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:58.636 19:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:00.540 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:00.540 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:00.540 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.540 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:00.540 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.540 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:00.540 19:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 
00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.800 [2024-07-15 19:18:11.499674] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.800 19:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:02.176 19:18:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.177 19:18:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:02.177 19:18:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.177 19:18:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:02.177 19:18:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.079 [2024-07-15 19:18:14.777192] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:04.079 
19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.079 19:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.455 19:18:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:05.455 19:18:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:05.455 19:18:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.455 19:18:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:05.455 19:18:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:07.405 19:18:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.405 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:07.405 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.405 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 [2024-07-15 19:18:18.063875] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 [2024-07-15 19:18:18.111965] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 [2024-07-15 19:18:18.164137] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
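The passes above exercise two variants of the same pattern: target/rpc.sh@81-@94 runs a full create -> connect -> verify -> tear down cycle against the target, while @99-@107 repeats only the RPC-side create/remove steps. As a rough standalone sketch of one full cycle (not the literal rpc.sh code; calling rpc.py directly instead of the harness's rpc_cmd wrapper is an assumption, and the host NQN/ID are simply copied from the log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  serial=SPDKISFASTANDAWESOME

  # target-side setup over JSON-RPC
  $rpc nvmf_create_subsystem $nqn -s $serial
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host $nqn

  # host-side connect (hostnqn/hostid as used in this run)
  nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562

  # waitforserial: poll until a block device with the expected serial shows up
  for i in $(seq 1 15); do
    [ "$(lsblk -l -o NAME,SERIAL | grep -c $serial)" -ge 1 ] && break
    sleep 2
  done

  # tear down
  nvme disconnect -n $nqn
  $rpc nvmf_subsystem_remove_ns $nqn 5
  $rpc nvmf_delete_subsystem $nqn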
00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 [2024-07-15 19:18:18.212306] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.406 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.665 [2024-07-15 19:18:18.260475] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.665 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:07.666 "tick_rate": 2300000000, 00:12:07.666 "poll_groups": [ 00:12:07.666 { 00:12:07.666 "name": "nvmf_tgt_poll_group_000", 00:12:07.666 "admin_qpairs": 2, 00:12:07.666 "io_qpairs": 168, 00:12:07.666 "current_admin_qpairs": 0, 00:12:07.666 "current_io_qpairs": 0, 00:12:07.666 "pending_bdev_io": 0, 00:12:07.666 "completed_nvme_io": 270, 00:12:07.666 "transports": [ 00:12:07.666 { 00:12:07.666 "trtype": "TCP" 00:12:07.666 } 00:12:07.666 ] 00:12:07.666 }, 00:12:07.666 { 00:12:07.666 "name": "nvmf_tgt_poll_group_001", 00:12:07.666 "admin_qpairs": 2, 00:12:07.666 "io_qpairs": 168, 00:12:07.666 "current_admin_qpairs": 0, 00:12:07.666 "current_io_qpairs": 0, 00:12:07.666 "pending_bdev_io": 0, 
00:12:07.666 "completed_nvme_io": 267, 00:12:07.666 "transports": [ 00:12:07.666 { 00:12:07.666 "trtype": "TCP" 00:12:07.666 } 00:12:07.666 ] 00:12:07.666 }, 00:12:07.666 { 00:12:07.666 "name": "nvmf_tgt_poll_group_002", 00:12:07.666 "admin_qpairs": 1, 00:12:07.666 "io_qpairs": 168, 00:12:07.666 "current_admin_qpairs": 0, 00:12:07.666 "current_io_qpairs": 0, 00:12:07.666 "pending_bdev_io": 0, 00:12:07.666 "completed_nvme_io": 217, 00:12:07.666 "transports": [ 00:12:07.666 { 00:12:07.666 "trtype": "TCP" 00:12:07.666 } 00:12:07.666 ] 00:12:07.666 }, 00:12:07.666 { 00:12:07.666 "name": "nvmf_tgt_poll_group_003", 00:12:07.666 "admin_qpairs": 2, 00:12:07.666 "io_qpairs": 168, 00:12:07.666 "current_admin_qpairs": 0, 00:12:07.666 "current_io_qpairs": 0, 00:12:07.666 "pending_bdev_io": 0, 00:12:07.666 "completed_nvme_io": 268, 00:12:07.666 "transports": [ 00:12:07.666 { 00:12:07.666 "trtype": "TCP" 00:12:07.666 } 00:12:07.666 ] 00:12:07.666 } 00:12:07.666 ] 00:12:07.666 }' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.666 rmmod nvme_tcp 00:12:07.666 rmmod nvme_fabrics 00:12:07.666 rmmod nvme_keyring 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1515844 ']' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1515844 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1515844 ']' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1515844 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1515844 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1515844' 00:12:07.666 killing process with pid 1515844 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1515844 00:12:07.666 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1515844 00:12:07.925 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.925 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.925 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.925 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.925 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.925 19:18:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.925 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.925 19:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.463 19:18:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:10.463 00:12:10.463 real 0m31.938s 00:12:10.463 user 1m38.078s 00:12:10.463 sys 0m5.778s 00:12:10.463 19:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.463 19:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.463 ************************************ 00:12:10.464 END TEST nvmf_rpc 00:12:10.464 ************************************ 00:12:10.464 19:18:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:10.464 19:18:20 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:10.464 19:18:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:10.464 19:18:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.464 19:18:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:10.464 ************************************ 00:12:10.464 START TEST nvmf_invalid 00:12:10.464 ************************************ 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:10.464 * Looking for test storage... 
00:12:10.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.464 19:18:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:15.738 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.738 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:15.739 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:15.739 Found net devices under 0000:86:00.0: cvl_0_0 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:15.739 Found net devices under 0000:86:00.1: cvl_0_1 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:15.739 19:18:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:15.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:12:15.739 00:12:15.739 --- 10.0.0.2 ping statistics --- 00:12:15.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.739 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:12:15.739 00:12:15.739 --- 10.0.0.1 ping statistics --- 00:12:15.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.739 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1523917 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1523917 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1523917 ']' 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.739 [2024-07-15 19:18:26.139108] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:12:15.739 [2024-07-15 19:18:26.139153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.739 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.739 [2024-07-15 19:18:26.168746] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:15.739 [2024-07-15 19:18:26.196646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.739 [2024-07-15 19:18:26.238862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.739 [2024-07-15 19:18:26.238901] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.739 [2024-07-15 19:18:26.238909] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.739 [2024-07-15 19:18:26.238915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.739 [2024-07-15 19:18:26.238920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.739 [2024-07-15 19:18:26.238956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.739 [2024-07-15 19:18:26.239058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.739 [2024-07-15 19:18:26.239136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.739 [2024-07-15 19:18:26.239137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8666 00:12:15.739 [2024-07-15 19:18:26.550866] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:15.739 { 00:12:15.739 "nqn": "nqn.2016-06.io.spdk:cnode8666", 00:12:15.739 "tgt_name": "foobar", 00:12:15.739 "method": "nvmf_create_subsystem", 00:12:15.739 "req_id": 1 00:12:15.739 } 00:12:15.739 Got JSON-RPC error response 00:12:15.739 response: 00:12:15.739 { 00:12:15.739 "code": -32603, 00:12:15.739 "message": "Unable to find target foobar" 00:12:15.739 }' 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:15.739 { 00:12:15.739 "nqn": "nqn.2016-06.io.spdk:cnode8666", 00:12:15.739 "tgt_name": "foobar", 00:12:15.739 "method": "nvmf_create_subsystem", 00:12:15.739 "req_id": 1 
00:12:15.739 } 00:12:15.739 Got JSON-RPC error response 00:12:15.739 response: 00:12:15.739 { 00:12:15.739 "code": -32603, 00:12:15.739 "message": "Unable to find target foobar" 00:12:15.739 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:15.739 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18240 00:12:15.999 [2024-07-15 19:18:26.747530] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18240: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:15.999 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:15.999 { 00:12:15.999 "nqn": "nqn.2016-06.io.spdk:cnode18240", 00:12:15.999 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:15.999 "method": "nvmf_create_subsystem", 00:12:15.999 "req_id": 1 00:12:15.999 } 00:12:15.999 Got JSON-RPC error response 00:12:15.999 response: 00:12:15.999 { 00:12:15.999 "code": -32602, 00:12:15.999 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:15.999 }' 00:12:15.999 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:15.999 { 00:12:15.999 "nqn": "nqn.2016-06.io.spdk:cnode18240", 00:12:15.999 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:15.999 "method": "nvmf_create_subsystem", 00:12:15.999 "req_id": 1 00:12:15.999 } 00:12:15.999 Got JSON-RPC error response 00:12:15.999 response: 00:12:15.999 { 00:12:15.999 "code": -32602, 00:12:15.999 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:15.999 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:15.999 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:15.999 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11265 00:12:16.259 [2024-07-15 19:18:26.940154] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11265: invalid model number 'SPDK_Controller' 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:16.259 { 00:12:16.259 "nqn": "nqn.2016-06.io.spdk:cnode11265", 00:12:16.259 "model_number": "SPDK_Controller\u001f", 00:12:16.259 "method": "nvmf_create_subsystem", 00:12:16.259 "req_id": 1 00:12:16.259 } 00:12:16.259 Got JSON-RPC error response 00:12:16.259 response: 00:12:16.259 { 00:12:16.259 "code": -32602, 00:12:16.259 "message": "Invalid MN SPDK_Controller\u001f" 00:12:16.259 }' 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:16.259 { 00:12:16.259 "nqn": "nqn.2016-06.io.spdk:cnode11265", 00:12:16.259 "model_number": "SPDK_Controller\u001f", 00:12:16.259 "method": "nvmf_create_subsystem", 00:12:16.259 "req_id": 1 00:12:16.259 } 00:12:16.259 Got JSON-RPC error response 00:12:16.259 response: 00:12:16.259 { 00:12:16.259 "code": -32602, 00:12:16.259 "message": "Invalid MN SPDK_Controller\u001f" 00:12:16.259 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' 
'48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:16.259 19:18:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 
00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 
00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:16.259 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Tka]w=>|@%;klBKa]hyVy' 00:12:16.260 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Tka]w=>|@%;klBKa]hyVy' nqn.2016-06.io.spdk:cnode2 00:12:16.520 [2024-07-15 19:18:27.257271] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: invalid serial number 'Tka]w=>|@%;klBKa]hyVy' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:16.520 { 00:12:16.520 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:16.520 "serial_number": 
"Tka]w=>|@%;klBKa]hyVy", 00:12:16.520 "method": "nvmf_create_subsystem", 00:12:16.520 "req_id": 1 00:12:16.520 } 00:12:16.520 Got JSON-RPC error response 00:12:16.520 response: 00:12:16.520 { 00:12:16.520 "code": -32602, 00:12:16.520 "message": "Invalid SN Tka]w=>|@%;klBKa]hyVy" 00:12:16.520 }' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:16.520 { 00:12:16.520 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:16.520 "serial_number": "Tka]w=>|@%;klBKa]hyVy", 00:12:16.520 "method": "nvmf_create_subsystem", 00:12:16.520 "req_id": 1 00:12:16.520 } 00:12:16.520 Got JSON-RPC error response 00:12:16.520 response: 00:12:16.520 { 00:12:16.520 "code": -32602, 00:12:16.520 "message": "Invalid SN Tka]w=>|@%;klBKa]hyVy" 00:12:16.520 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.520 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.780 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 
00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ : == \- ]] 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 
':B{*W>+"?#W@m~9 %$[}Dcdc=<1;C3s4$Z@/dJmN' 00:12:16.781 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ':B{*W>+"?#W@m~9 %$[}Dcdc=<1;C3s4$Z@/dJmN' nqn.2016-06.io.spdk:cnode2811 00:12:17.040 [2024-07-15 19:18:27.698741] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2811: invalid model number ':B{*W>+"?#W@m~9 %$[}Dcdc=<1;C3s4$Z@/dJmN' 00:12:17.040 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:17.040 { 00:12:17.040 "nqn": "nqn.2016-06.io.spdk:cnode2811", 00:12:17.040 "model_number": ":B{*W>+\"?#W@m~9 %$[}Dcdc=<1;C3s4$Z@/dJm\u007fN", 00:12:17.040 "method": "nvmf_create_subsystem", 00:12:17.040 "req_id": 1 00:12:17.040 } 00:12:17.040 Got JSON-RPC error response 00:12:17.040 response: 00:12:17.040 { 00:12:17.040 "code": -32602, 00:12:17.040 "message": "Invalid MN :B{*W>+\"?#W@m~9 %$[}Dcdc=<1;C3s4$Z@/dJm\u007fN" 00:12:17.040 }' 00:12:17.040 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:17.040 { 00:12:17.040 "nqn": "nqn.2016-06.io.spdk:cnode2811", 00:12:17.040 "model_number": ":B{*W>+\"?#W@m~9 %$[}Dcdc=<1;C3s4$Z@/dJm\u007fN", 00:12:17.040 "method": "nvmf_create_subsystem", 00:12:17.040 "req_id": 1 00:12:17.040 } 00:12:17.040 Got JSON-RPC error response 00:12:17.040 response: 00:12:17.040 { 00:12:17.040 "code": -32602, 00:12:17.040 "message": "Invalid MN :B{*W>+\"?#W@m~9 %$[}Dcdc=<1;C3s4$Z@/dJm\u007fN" 00:12:17.040 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:17.040 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:17.040 [2024-07-15 19:18:27.891459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.299 19:18:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:17.299 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:17.299 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:17.299 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:17.299 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:17.299 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:17.558 [2024-07-15 19:18:28.280712] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:17.558 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:17.558 { 00:12:17.558 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:17.558 "listen_address": { 00:12:17.558 "trtype": "tcp", 00:12:17.558 "traddr": "", 00:12:17.558 "trsvcid": "4421" 00:12:17.558 }, 00:12:17.558 "method": "nvmf_subsystem_remove_listener", 00:12:17.558 "req_id": 1 00:12:17.558 } 00:12:17.558 Got JSON-RPC error response 00:12:17.558 response: 00:12:17.558 { 00:12:17.558 "code": -32602, 00:12:17.558 "message": "Invalid parameters" 00:12:17.558 }' 00:12:17.558 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:17.558 { 00:12:17.558 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:17.558 "listen_address": { 00:12:17.558 "trtype": "tcp", 00:12:17.558 "traddr": "", 00:12:17.558 "trsvcid": "4421" 
00:12:17.558 }, 00:12:17.558 "method": "nvmf_subsystem_remove_listener", 00:12:17.558 "req_id": 1 00:12:17.558 } 00:12:17.558 Got JSON-RPC error response 00:12:17.558 response: 00:12:17.558 { 00:12:17.558 "code": -32602, 00:12:17.558 "message": "Invalid parameters" 00:12:17.558 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:17.558 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3953 -i 0 00:12:17.817 [2024-07-15 19:18:28.465286] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3953: invalid cntlid range [0-65519] 00:12:17.817 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:17.817 { 00:12:17.817 "nqn": "nqn.2016-06.io.spdk:cnode3953", 00:12:17.817 "min_cntlid": 0, 00:12:17.817 "method": "nvmf_create_subsystem", 00:12:17.817 "req_id": 1 00:12:17.817 } 00:12:17.817 Got JSON-RPC error response 00:12:17.817 response: 00:12:17.817 { 00:12:17.817 "code": -32602, 00:12:17.817 "message": "Invalid cntlid range [0-65519]" 00:12:17.817 }' 00:12:17.817 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:17.817 { 00:12:17.817 "nqn": "nqn.2016-06.io.spdk:cnode3953", 00:12:17.817 "min_cntlid": 0, 00:12:17.817 "method": "nvmf_create_subsystem", 00:12:17.817 "req_id": 1 00:12:17.817 } 00:12:17.817 Got JSON-RPC error response 00:12:17.817 response: 00:12:17.817 { 00:12:17.817 "code": -32602, 00:12:17.817 "message": "Invalid cntlid range [0-65519]" 00:12:17.817 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:17.817 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23611 -i 65520 00:12:17.817 [2024-07-15 19:18:28.649896] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23611: invalid cntlid range [65520-65519] 00:12:18.076 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:18.076 { 00:12:18.076 "nqn": "nqn.2016-06.io.spdk:cnode23611", 00:12:18.076 "min_cntlid": 65520, 00:12:18.076 "method": "nvmf_create_subsystem", 00:12:18.076 "req_id": 1 00:12:18.076 } 00:12:18.076 Got JSON-RPC error response 00:12:18.076 response: 00:12:18.076 { 00:12:18.076 "code": -32602, 00:12:18.076 "message": "Invalid cntlid range [65520-65519]" 00:12:18.076 }' 00:12:18.076 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:18.076 { 00:12:18.076 "nqn": "nqn.2016-06.io.spdk:cnode23611", 00:12:18.076 "min_cntlid": 65520, 00:12:18.076 "method": "nvmf_create_subsystem", 00:12:18.076 "req_id": 1 00:12:18.076 } 00:12:18.076 Got JSON-RPC error response 00:12:18.076 response: 00:12:18.076 { 00:12:18.076 "code": -32602, 00:12:18.076 "message": "Invalid cntlid range [65520-65519]" 00:12:18.076 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.076 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29947 -I 0 00:12:18.076 [2024-07-15 19:18:28.838563] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29947: invalid cntlid range [1-0] 00:12:18.076 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:18.076 { 00:12:18.076 "nqn": "nqn.2016-06.io.spdk:cnode29947", 00:12:18.076 "max_cntlid": 0, 00:12:18.076 
"method": "nvmf_create_subsystem", 00:12:18.076 "req_id": 1 00:12:18.076 } 00:12:18.076 Got JSON-RPC error response 00:12:18.076 response: 00:12:18.076 { 00:12:18.076 "code": -32602, 00:12:18.076 "message": "Invalid cntlid range [1-0]" 00:12:18.076 }' 00:12:18.076 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:18.076 { 00:12:18.076 "nqn": "nqn.2016-06.io.spdk:cnode29947", 00:12:18.076 "max_cntlid": 0, 00:12:18.076 "method": "nvmf_create_subsystem", 00:12:18.076 "req_id": 1 00:12:18.076 } 00:12:18.076 Got JSON-RPC error response 00:12:18.076 response: 00:12:18.076 { 00:12:18.076 "code": -32602, 00:12:18.076 "message": "Invalid cntlid range [1-0]" 00:12:18.076 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.076 19:18:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13991 -I 65520 00:12:18.335 [2024-07-15 19:18:29.027198] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13991: invalid cntlid range [1-65520] 00:12:18.335 19:18:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:18.335 { 00:12:18.335 "nqn": "nqn.2016-06.io.spdk:cnode13991", 00:12:18.335 "max_cntlid": 65520, 00:12:18.335 "method": "nvmf_create_subsystem", 00:12:18.335 "req_id": 1 00:12:18.335 } 00:12:18.335 Got JSON-RPC error response 00:12:18.335 response: 00:12:18.335 { 00:12:18.335 "code": -32602, 00:12:18.335 "message": "Invalid cntlid range [1-65520]" 00:12:18.335 }' 00:12:18.335 19:18:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:18.335 { 00:12:18.335 "nqn": "nqn.2016-06.io.spdk:cnode13991", 00:12:18.335 "max_cntlid": 65520, 00:12:18.335 "method": "nvmf_create_subsystem", 00:12:18.335 "req_id": 1 00:12:18.335 } 00:12:18.335 Got JSON-RPC error response 00:12:18.335 response: 00:12:18.335 { 00:12:18.335 "code": -32602, 00:12:18.335 "message": "Invalid cntlid range [1-65520]" 00:12:18.335 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.335 19:18:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21455 -i 6 -I 5 00:12:18.594 [2024-07-15 19:18:29.215828] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21455: invalid cntlid range [6-5] 00:12:18.594 19:18:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:18.594 { 00:12:18.594 "nqn": "nqn.2016-06.io.spdk:cnode21455", 00:12:18.594 "min_cntlid": 6, 00:12:18.594 "max_cntlid": 5, 00:12:18.594 "method": "nvmf_create_subsystem", 00:12:18.594 "req_id": 1 00:12:18.594 } 00:12:18.594 Got JSON-RPC error response 00:12:18.594 response: 00:12:18.594 { 00:12:18.594 "code": -32602, 00:12:18.594 "message": "Invalid cntlid range [6-5]" 00:12:18.594 }' 00:12:18.594 19:18:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:18.594 { 00:12:18.594 "nqn": "nqn.2016-06.io.spdk:cnode21455", 00:12:18.594 "min_cntlid": 6, 00:12:18.594 "max_cntlid": 5, 00:12:18.594 "method": "nvmf_create_subsystem", 00:12:18.594 "req_id": 1 00:12:18.594 } 00:12:18.594 Got JSON-RPC error response 00:12:18.594 response: 00:12:18.594 { 00:12:18.594 "code": -32602, 00:12:18.594 "message": "Invalid cntlid range [6-5]" 00:12:18.594 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.594 19:18:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:18.595 { 00:12:18.595 "name": "foobar", 00:12:18.595 "method": "nvmf_delete_target", 00:12:18.595 "req_id": 1 00:12:18.595 } 00:12:18.595 Got JSON-RPC error response 00:12:18.595 response: 00:12:18.595 { 00:12:18.595 "code": -32602, 00:12:18.595 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:18.595 }' 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:18.595 { 00:12:18.595 "name": "foobar", 00:12:18.595 "method": "nvmf_delete_target", 00:12:18.595 "req_id": 1 00:12:18.595 } 00:12:18.595 Got JSON-RPC error response 00:12:18.595 response: 00:12:18.595 { 00:12:18.595 "code": -32602, 00:12:18.595 "message": "The specified target doesn't exist, cannot delete it." 00:12:18.595 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.595 rmmod nvme_tcp 00:12:18.595 rmmod nvme_fabrics 00:12:18.595 rmmod nvme_keyring 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1523917 ']' 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1523917 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1523917 ']' 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1523917 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:18.595 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1523917 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1523917' 00:12:18.854 killing process with pid 1523917 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1523917 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1523917 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.854 19:18:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.390 19:18:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:21.390 00:12:21.390 real 0m10.858s 00:12:21.390 user 0m17.032s 00:12:21.390 sys 0m4.834s 00:12:21.390 19:18:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.390 19:18:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:21.390 ************************************ 00:12:21.390 END TEST nvmf_invalid 00:12:21.390 ************************************ 00:12:21.390 19:18:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:21.390 19:18:31 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:21.390 19:18:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:21.390 19:18:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.390 19:18:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.390 ************************************ 00:12:21.390 START TEST nvmf_abort 00:12:21.390 ************************************ 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:21.390 * Looking for test storage... 
00:12:21.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.390 19:18:31 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.391 19:18:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:26.662 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.662 19:18:36 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:26.662 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:26.662 Found net devices under 0000:86:00.0: cvl_0_0 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.662 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:26.663 Found net devices under 0000:86:00.1: cvl_0_1 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.663 19:18:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:12:26.663 00:12:26.663 --- 10.0.0.2 ping statistics --- 00:12:26.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.663 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:12:26.663 00:12:26.663 --- 10.0.0.1 ping statistics --- 00:12:26.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.663 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1528071 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1528071 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1528071 ']' 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:26.663 [2024-07-15 19:18:37.184074] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:12:26.663 [2024-07-15 19:18:37.184115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.663 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.663 [2024-07-15 19:18:37.213506] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:26.663 [2024-07-15 19:18:37.241106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:26.663 [2024-07-15 19:18:37.282014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
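The trace above is the stock nvmf TCP test-network bring-up: one ice port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 (target side), the other (cvl_0_1) stays in the root namespace as 10.0.0.1 (initiator side), port 4420 is opened in iptables, both directions are verified with ping, and nvmf_tgt is then launched inside the namespace. A minimal stand-alone sketch of the same bring-up follows; the namespace name and the repo-relative nvmf_tgt path are placeholders for the full paths used in this run, and it assumes it is executed from the SPDK repo root with root privileges.

  #!/usr/bin/env bash
  TGT_IF=cvl_0_0      # port that will host the NVMe-oF target
  INI_IF=cvl_0_1      # port used by the initiator
  NS=cvl_0_0_ns_spdk  # target-side network namespace

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                                # move the target port into the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                            # initiator address in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"        # target address inside the namespace
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator side
  ping -c 1 10.0.0.2                                               # initiator -> target reachability check
  ip netns exec "$NS" ping -c 1 10.0.0.1                           # target -> initiator reachability check
  modprobe nvme-tcp                                                # host NVMe/TCP driver for later connects
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE & # target runs inside the namespace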
00:12:26.663 [2024-07-15 19:18:37.282051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.663 [2024-07-15 19:18:37.282058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.663 [2024-07-15 19:18:37.282063] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.663 [2024-07-15 19:18:37.282068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.663 [2024-07-15 19:18:37.282171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.663 [2024-07-15 19:18:37.282259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.663 [2024-07-15 19:18:37.282260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 [2024-07-15 19:18:37.412242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 Malloc0 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 Delay0 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 [2024-07-15 19:18:37.489000] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.663 19:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:26.922 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.922 [2024-07-15 19:18:37.596183] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:28.890 Initializing NVMe Controllers 00:12:28.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:28.890 controller IO queue size 128 less than required 00:12:28.890 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:28.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:28.890 Initialization complete. Launching workers. 
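abort.sh provisions the target entirely over JSON-RPC before launching the abort example; the rpc_cmd calls traced above condense to the sketch below. It assumes it is run from the SPDK repo root against the default /var/tmp/spdk.sock RPC socket of the nvmf_tgt started earlier.

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256                  # TCP transport with the options used by the test
  $rpc bdev_malloc_create 64 4096 -b Malloc0                           # 64 MiB RAM bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                     # large artificial latency keeps I/O outstanding
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0    # allow-any-host subsystem, serial SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0         # expose the delay bdev as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
       -c 0x1 -t 1 -l warning -q 128                                   # 1 s abort workload at queue depth 128 on core 0

The delay bdev in front of Malloc0 is what keeps requests in flight long enough for the abort example to cancel them, which is the behaviour the "abort submitted ... failed to submit" summary below measures.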
00:12:28.890 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 41884 00:12:28.890 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41948, failed to submit 62 00:12:28.890 success 41888, unsuccess 60, failed 0 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.890 rmmod nvme_tcp 00:12:28.890 rmmod nvme_fabrics 00:12:28.890 rmmod nvme_keyring 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1528071 ']' 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1528071 00:12:28.890 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1528071 ']' 00:12:28.891 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1528071 00:12:28.891 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:12:28.891 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:28.891 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1528071 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1528071' 00:12:29.150 killing process with pid 1528071 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1528071 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1528071 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.150 19:18:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.685 19:18:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.685 00:12:31.685 real 0m10.240s 00:12:31.685 user 0m10.850s 00:12:31.685 sys 0m4.919s 00:12:31.685 19:18:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.685 19:18:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:31.685 ************************************ 00:12:31.685 END TEST nvmf_abort 00:12:31.685 ************************************ 00:12:31.685 19:18:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:31.685 19:18:42 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:31.685 19:18:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:31.685 19:18:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.685 19:18:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.685 ************************************ 00:12:31.685 START TEST nvmf_ns_hotplug_stress 00:12:31.685 ************************************ 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:31.685 * Looking for test storage... 00:12:31.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.685 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.685 19:18:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.686 19:18:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.686 19:18:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:36.967 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:36.967 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.967 19:18:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:36.967 Found net devices under 0000:86:00.0: cvl_0_0 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:36.967 Found net devices under 0000:86:00.1: cvl_0_1 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.967 19:18:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.967 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:36.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:12:36.967 00:12:36.968 --- 10.0.0.2 ping statistics --- 00:12:36.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.968 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:12:36.968 00:12:36.968 --- 10.0.0.1 ping statistics --- 00:12:36.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.968 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1532060 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1532060 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1532060 ']' 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.968 [2024-07-15 19:18:47.511634] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:12:36.968 [2024-07-15 19:18:47.511676] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.968 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.968 [2024-07-15 19:18:47.540805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:36.968 [2024-07-15 19:18:47.568039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.968 [2024-07-15 19:18:47.608273] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.968 [2024-07-15 19:18:47.608311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.968 [2024-07-15 19:18:47.608317] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.968 [2024-07-15 19:18:47.608323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.968 [2024-07-15 19:18:47.608328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.968 [2024-07-15 19:18:47.608429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.968 [2024-07-15 19:18:47.608534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.968 [2024-07-15 19:18:47.608535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:36.968 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:37.227 [2024-07-15 19:18:47.889380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.227 19:18:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:37.486 19:18:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.486 [2024-07-15 19:18:48.250697] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.486 19:18:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:37.746 19:18:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:38.005 Malloc0 00:12:38.005 19:18:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:38.005 Delay0 00:12:38.264 19:18:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.264 19:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:38.523 NULL1 00:12:38.523 19:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:38.782 19:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:38.782 19:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1532321 00:12:38.782 19:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:38.782 19:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.782 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.717 Read completed with error (sct=0, sc=11) 00:12:39.717 19:18:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:39.976 19:18:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:39.976 19:18:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:40.235 true 00:12:40.235 19:18:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:40.235 19:18:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.169 19:18:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.169 19:18:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:41.169 19:18:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:41.428 true 00:12:41.428 19:18:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:41.428 19:18:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
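From here on, ns_hotplug_stress.sh repeats the pattern visible in the trace: spdk_nvme_perf runs a 30-second randread workload against cnode1 from lcore 0 while the script keeps hot-removing and re-adding namespace 1 and resizing the null bdev, checking on each pass that the perf process is still alive. A condensed sketch of that loop, assuming the bdev and subsystem names used above and the repo-root rpc.py path (the exact ordering inside the script's loop may differ slightly):

  rpc=scripts/rpc.py
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &                        # background I/O load for 30 s
  perf_pid=$!

  null_size=1000
  while kill -0 "$perf_pid" 2>/dev/null; do                            # loop until the perf run exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove namespace 1 under load
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # hot-add it back
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"                         # grow the other namespace's backing bdev
  done
  wait "$perf_pid"                                                     # collect perf's exit status

The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are consistent with reads landing while namespace 1 is briefly detached; -Q 1000 makes perf tolerate and rate-limit those errors rather than abort.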
00:12:41.686 19:18:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.686 19:18:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:41.686 19:18:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:41.944 true 00:12:41.944 19:18:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:41.944 19:18:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.336 19:18:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.336 19:18:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:43.336 19:18:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:43.336 true 00:12:43.336 19:18:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:43.336 19:18:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:44.273 19:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.533 19:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:44.533 19:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:44.533 true 00:12:44.533 19:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:44.533 19:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.792 19:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.050 19:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:45.050 19:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:45.050 true 00:12:45.317 19:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:45.317 19:18:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.317 19:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.575 19:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:45.575 19:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:45.834 true 00:12:45.834 19:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:45.834 19:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.834 19:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.093 19:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:46.093 19:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:46.352 true 00:12:46.352 19:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:46.352 19:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.287 19:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.601 19:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:47.601 19:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:47.878 true 00:12:47.878 19:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:47.878 19:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.815 19:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.815 19:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:48.815 19:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:49.073 true 00:12:49.073 19:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:49.073 19:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.073 19:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.330 19:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:49.330 19:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:49.588 true 00:12:49.588 19:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:49.588 19:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.961 19:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.961 19:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:50.961 19:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:50.961 true 00:12:51.218 19:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:51.218 19:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.218 19:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.477 19:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:51.477 19:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:51.736 true 00:12:51.736 19:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:51.736 19:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.111 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:12:53.111 19:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.111 19:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:53.111 19:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:53.368 true 00:12:53.369 19:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:53.369 19:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.303 19:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.303 19:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:54.303 19:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:54.303 true 00:12:54.561 19:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:54.561 19:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.561 19:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.819 19:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:54.819 19:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:55.100 true 00:12:55.100 19:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:55.100 19:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.034 19:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:56.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.292 19:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:56.292 19:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:56.550 true 00:12:56.550 19:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:56.550 19:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.487 19:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.487 19:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:57.487 19:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:57.745 true 00:12:57.745 19:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:57.745 19:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.003 19:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.003 19:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:58.003 19:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:58.288 true 00:12:58.288 19:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:12:58.288 19:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.483 19:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.483 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.483 19:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:59.483 19:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:59.741 true 00:12:59.741 19:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 
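A minimal sketch of the loop guard behind the repeated "kill -0 1532321" entries in this trace (ns_hotplug_stress.sh line 44 as traced): kill -0 delivers no signal and only reports whether the PID still exists, so the hotplug loop keeps cycling while the background I/O workload is alive. The variable name below is illustrative; the PID is the one from this run.

    perf_pid=1532321                 # PID of the background I/O workload seen in this trace
    while kill -0 "$perf_pid" 2>/dev/null; do
        : # one add/remove/resize iteration goes here (sketched further down)
    done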
00:12:59.741 19:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.678 19:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:00.678 19:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:00.678 19:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:00.937 true 00:13:00.937 19:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:13:00.937 19:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.196 19:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.455 19:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:01.455 19:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:01.455 true 00:13:01.455 19:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:13:01.455 19:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.830 19:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.830 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.830 19:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:02.830 19:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:03.139 true 00:13:03.139 19:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:13:03.139 19:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.075 19:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.075 19:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:04.075 19:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:04.334 true 00:13:04.334 19:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:13:04.334 19:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.593 19:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.593 19:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:04.593 19:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:04.852 true 00:13:04.852 19:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:13:04.852 19:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.233 19:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.233 19:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:06.233 19:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:06.490 true 00:13:06.490 19:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321 00:13:06.490 19:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.422 19:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.422 19:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:07.422 19:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 
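The per-iteration pattern the trace above repeats (script lines 44-50) is: while the workload PID is alive, detach namespace 1, re-attach the Delay0 bdev as a namespace, then grow the NULL1 null bdev by one block count (null_size runs 1003 through 1029 in this section). A hedged sketch of that cycle using the same rpc.py calls shown in the trace; the rpc variable and the starting size are illustrative, not the verbatim script.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1002                    # illustrative starting point; this trace picks up at 1003
    while kill -0 "$perf_pid" 2>/dev/null; do
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"
    done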
00:13:07.680 true
00:13:07.681 19:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321
00:13:07.681 19:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:07.939 19:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:07.939 19:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:13:07.939 19:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:13:08.197 true
00:13:08.197 19:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321
00:13:08.197 19:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:09.590 Initializing NVMe Controllers
00:13:09.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:09.590 Controller IO queue size 128, less than required.
00:13:09.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:09.590 Controller IO queue size 128, less than required.
00:13:09.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:09.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:09.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:09.590 Initialization complete. Launching workers.
00:13:09.590 ========================================================
00:13:09.590                                                               Latency(us)
00:13:09.590 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:13:09.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1833.68       0.90   47610.62    2067.73 1055115.79
00:13:09.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17271.61       8.43    7411.08    2164.31  382317.27
00:13:09.590 ========================================================
00:13:09.590 Total                                                                  :   19105.29       9.33   11269.34    2067.73 1055115.79
00:13:09.590
00:13:09.590 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:09.590 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:13:09.590 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:13:09.849 true
00:13:09.849 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1532321
00:13:09.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1532321) - No such process
00:13:09.849 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1532321
00:13:09.849 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:09.849 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:10.107 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:10.107 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:10.107 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:10.107 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:10.107 19:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:10.365 null0
00:13:10.365 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:10.365 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:10.365 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:13:10.624 null1
00:13:10.624 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:10.624 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:10.624 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:13:10.624 null2
00:13:10.624 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:10.624 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i <
nthreads )) 00:13:10.624 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:10.882 null3 00:13:10.882 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:10.882 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:10.882 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:11.141 null4 00:13:11.141 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.141 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.141 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:11.141 null5 00:13:11.141 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.141 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.141 19:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:11.399 null6 00:13:11.399 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.399 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.399 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:11.658 null7 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
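The parallel phase set up above creates eight small null bdevs, null0 through null7 (script lines 58-60 as traced), one per worker. A hedged sketch of that setup using the same bdev_null_create arguments seen in the trace (name, size 100, block size 4096); the loop variable is illustrative and this is not the verbatim script.

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096    # same arguments as the traced calls above
    done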
00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:11.658 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
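Each background worker then runs the add_remove helper traced at script lines 14-18: it attaches its bdev under a fixed namespace ID, detaches it again, and repeats ten times, while line 66 (the "wait 1537902 1537904 ..." entry just below) collects the eight workers. A hedged sketch of that structure; it mirrors the shape of the traced calls but is not the verbatim ns_hotplug_stress.sh.

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &     # NSID 1..8 paired with null0..null7, as in the trace
        pids+=($!)
    done
    wait "${pids[@]}"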
00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1537902 1537904 1537905 1537907 1537909 1537911 1537913 1537914 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.659 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.917 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:11.918 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.918 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.918 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:11.918 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.918 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.918 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:11.918 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.918 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.918 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.175 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.175 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.175 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:13:12.176 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.176 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.176 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.176 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.176 19:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.433 
19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.433 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.691 19:19:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.691 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.002 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.261 19:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.261 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.261 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.261 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.261 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.261 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.261 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.261 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.261 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.520 19:19:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.520 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.778 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.037 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.296 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.296 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.296 19:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.296 19:19:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.296 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.297 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.297 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.297 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.297 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.556 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.815 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.073 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.332 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.332 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.332 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.332 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.332 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.332 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.332 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.332 19:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.332 19:19:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.332 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.591 rmmod nvme_tcp 00:13:15.591 rmmod nvme_fabrics 00:13:15.591 rmmod nvme_keyring 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1532060 ']' 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1532060 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1532060 ']' 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1532060 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1532060 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1532060' 00:13:15.591 killing process with pid 1532060 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@967 -- # kill 1532060 00:13:15.591 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1532060 00:13:15.850 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:15.850 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:15.850 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:15.850 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.850 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.850 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.850 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.850 19:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.754 19:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.754 00:13:17.754 real 0m46.452s 00:13:17.754 user 3m8.527s 00:13:17.754 sys 0m14.466s 00:13:17.754 19:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.754 19:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.754 ************************************ 00:13:17.754 END TEST nvmf_ns_hotplug_stress 00:13:17.754 ************************************ 00:13:17.754 19:19:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:17.754 19:19:28 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:17.754 19:19:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:17.754 19:19:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.754 19:19:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.754 ************************************ 00:13:17.754 START TEST nvmf_connect_stress 00:13:17.754 ************************************ 00:13:17.754 19:19:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:18.013 * Looking for test storage... 
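The ns_hotplug_stress run that has just finished (the @16-@18 trace above) is a bounded stress loop: for ten iterations, namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 are attached from the null bdevs null0-null7 and then detached again, with the individual RPCs completing in whatever order the target services them. A minimal sketch of that loop, reconstructed from the xtrace rather than copied from target/ns_hotplug_stress.sh — the rpc variable, the backgrounding with & and the wait calls are assumptions inferred from the interleaved ordering in the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace

for (( i = 0; i < 10; ++i )); do
    # attach null0..null7 as namespaces 1..8 of cnode1 (completion order is not deterministic)
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))" &
    done
    wait
    # detach all eight namespaces again before the next round
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n" &
    done
    wait
done

Once the loop exits, the trap is cleared and nvmftestfini tears the target down: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the nvmf_tgt process (pid 1532060 here) is killed, and the cvl_0_1 address is flushed before the next test starts.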
00:13:18.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.013 19:19:28 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.014 19:19:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:23.354 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.354 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:23.354 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:23.355 Found net devices under 0000:86:00.0: cvl_0_0 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:23.355 19:19:33 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:23.355 Found net devices under 0000:86:00.1: cvl_0_1 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.355 19:19:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.355 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.355 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.355 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:23.355 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.355 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.355 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.355 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:23.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:23.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:13:23.355 00:13:23.355 --- 10.0.0.2 ping statistics --- 00:13:23.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.355 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:13:23.355 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:13:23.615 00:13:23.615 --- 10.0.0.1 ping statistics --- 00:13:23.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.615 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1542228 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1542228 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1542228 ']' 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.615 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.615 [2024-07-15 19:19:34.296944] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
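The nvmf_tcp_init trace above splits the two detected e810 ports into an initiator side and a target side: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, and TCP port 4420 is opened in iptables before each direction is verified with a single ping. A condensed restatement of the commands seen in the trace (not the verbatim nvmf/common.sh source):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With both pings answering, nvmfappstart launches nvmf_tgt inside the namespace (the ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE invocation visible in the log), so the subsequent rpc_cmd calls configure a target that is only reachable over 10.0.0.2:4420.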
00:13:23.615 [2024-07-15 19:19:34.296986] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.615 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.615 [2024-07-15 19:19:34.327120] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:23.615 [2024-07-15 19:19:34.354320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.615 [2024-07-15 19:19:34.393230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.615 [2024-07-15 19:19:34.393287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.615 [2024-07-15 19:19:34.393295] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.615 [2024-07-15 19:19:34.393301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.615 [2024-07-15 19:19:34.393306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.615 [2024-07-15 19:19:34.393413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.615 [2024-07-15 19:19:34.393430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.615 [2024-07-15 19:19:34.393432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.874 [2024-07-15 19:19:34.530978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.874 [2024-07-15 19:19:34.564373] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.874 NULL1 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1542291 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.874 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.875 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.875 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.875 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:23.875 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:23.875 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:23.875 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.875 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.875 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.443 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.443 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:24.443 19:19:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.443 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.443 19:19:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.701 19:19:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.701 19:19:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:24.701 19:19:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.701 19:19:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.701 19:19:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.960 19:19:35 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.960 19:19:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:24.960 19:19:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.960 19:19:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.960 19:19:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.220 19:19:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.220 19:19:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:25.220 19:19:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.220 19:19:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.220 19:19:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.479 19:19:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.479 19:19:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:25.479 19:19:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.479 19:19:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.479 19:19:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.049 19:19:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.049 19:19:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:26.049 19:19:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.049 19:19:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.049 19:19:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.308 19:19:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.308 19:19:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:26.308 19:19:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.308 19:19:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.308 19:19:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.567 19:19:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.567 19:19:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:26.567 19:19:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.567 19:19:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.567 19:19:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.827 19:19:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.827 19:19:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:26.827 19:19:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.827 19:19:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.827 19:19:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.086 19:19:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:13:27.086 19:19:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:27.086 19:19:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.086 19:19:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.086 19:19:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.654 19:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.654 19:19:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:27.654 19:19:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.654 19:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.654 19:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.913 19:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.913 19:19:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:27.913 19:19:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.913 19:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.913 19:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.172 19:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.172 19:19:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:28.172 19:19:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.172 19:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.172 19:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.430 19:19:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.430 19:19:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:28.430 19:19:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.430 19:19:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.430 19:19:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.688 19:19:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.689 19:19:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:28.689 19:19:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.689 19:19:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.689 19:19:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.256 19:19:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.256 19:19:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:29.256 19:19:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.256 19:19:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.256 19:19:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.515 19:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.515 19:19:40 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:29.515 19:19:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.515 19:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.515 19:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.775 19:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.775 19:19:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:29.775 19:19:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.775 19:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.775 19:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.034 19:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.034 19:19:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:30.034 19:19:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.034 19:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.034 19:19:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.293 19:19:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.293 19:19:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:30.293 19:19:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.293 19:19:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.293 19:19:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.860 19:19:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.860 19:19:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:30.860 19:19:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.860 19:19:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.860 19:19:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.119 19:19:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.119 19:19:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:31.119 19:19:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.119 19:19:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.119 19:19:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.378 19:19:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.378 19:19:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:31.378 19:19:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.378 19:19:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.378 19:19:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.637 19:19:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.637 19:19:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 1542291 00:13:31.637 19:19:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.637 19:19:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.637 19:19:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.896 19:19:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.896 19:19:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:31.896 19:19:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.896 19:19:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.896 19:19:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.464 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.464 19:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:32.464 19:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.464 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.464 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.723 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.723 19:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:32.723 19:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.723 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.723 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.981 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.981 19:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:32.981 19:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.981 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.981 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.238 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.238 19:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:33.238 19:19:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.238 19:19:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.238 19:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.496 19:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.496 19:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:33.496 19:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.496 19:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.496 19:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.062 19:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.062 19:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:34.062 19:19:44 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.062 19:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.062 19:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.062 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1542291 00:13:34.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1542291) - No such process 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1542291 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.320 19:19:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.320 rmmod nvme_tcp 00:13:34.320 rmmod nvme_fabrics 00:13:34.320 rmmod nvme_keyring 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1542228 ']' 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1542228 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1542228 ']' 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1542228 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1542228 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1542228' 00:13:34.320 killing process with pid 1542228 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1542228 00:13:34.320 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1542228 00:13:34.578 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
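The connect_stress run traced above follows a simple supervision pattern: the stressor is launched in the background, its PID is saved as PERF_PID, and the script repeatedly probes it with kill -0 between batches of RPCs until the probe reports "No such process", after which the stressor is reaped with wait. Below is a minimal bash sketch of that liveness-polling pattern only; STRESSOR_CMD and do_background_work are hypothetical stand-ins for the real connect_stress invocation and the rpc.txt-driven rpc_cmd batches shown in the trace.

#!/usr/bin/env bash
# Hypothetical stand-in for the connect_stress invocation seen in the trace.
STRESSOR_CMD="sleep 10"

$STRESSOR_CMD &          # start the stressor in the background
PERF_PID=$!

do_background_work() {   # stand-in for the rpc_cmd batches driven from rpc.txt
    sleep 1
}

# kill -0 delivers no signal; its exit status only reports whether the PID
# still exists, so the loop keeps issuing work while the stressor is alive.
while kill -0 "$PERF_PID" 2>/dev/null; do
    do_background_work
done

# Reap the stressor and pick up its exit status (bash remembers the status
# even though the process has already exited by the time we get here).
wait "$PERF_PID"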
00:13:34.578 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.578 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.578 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.578 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.578 19:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.578 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.578 19:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.507 19:19:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.507 00:13:36.507 real 0m18.705s 00:13:36.507 user 0m39.827s 00:13:36.507 sys 0m8.099s 00:13:36.507 19:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:36.507 19:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.507 ************************************ 00:13:36.507 END TEST nvmf_connect_stress 00:13:36.507 ************************************ 00:13:36.507 19:19:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:36.507 19:19:47 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:36.507 19:19:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:36.507 19:19:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.507 19:19:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.764 ************************************ 00:13:36.764 START TEST nvmf_fused_ordering 00:13:36.764 ************************************ 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:36.764 * Looking for test storage... 
00:13:36.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.764 19:19:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:42.036 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:42.036 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:42.036 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:42.037 Found net devices under 0000:86:00.0: cvl_0_0 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.037 19:19:52 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:42.037 Found net devices under 0000:86:00.1: cvl_0_1 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:42.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:42.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:13:42.037 00:13:42.037 --- 10.0.0.2 ping statistics --- 00:13:42.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.037 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:13:42.037 00:13:42.037 --- 10.0.0.1 ping statistics --- 00:13:42.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.037 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1547434 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1547434 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1547434 ']' 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:42.037 [2024-07-15 19:19:52.477888] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:13:42.037 [2024-07-15 19:19:52.477932] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.037 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.037 [2024-07-15 19:19:52.506990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:42.037 [2024-07-15 19:19:52.533331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.037 [2024-07-15 19:19:52.572068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.037 [2024-07-15 19:19:52.572107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.037 [2024-07-15 19:19:52.572114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.037 [2024-07-15 19:19:52.572120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.037 [2024-07-15 19:19:52.572125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.037 [2024-07-15 19:19:52.572143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 [2024-07-15 19:19:52.695747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 [2024-07-15 19:19:52.711897] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 NULL1 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 19:19:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.038 19:19:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:42.038 [2024-07-15 19:19:52.764740] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:13:42.038 [2024-07-15 19:19:52.764778] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547455 ] 00:13:42.038 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.038 [2024-07-15 19:19:52.794110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
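For reference, the target-side configuration that the rpc_cmd calls above apply (TCP transport, subsystem, listener, null bdev, namespace) could also be issued directly with SPDK's scripts/rpc.py client. The method names and arguments below are copied from the trace; the ./scripts/rpc.py invocation style and the default RPC socket are assumptions of this sketch, since the harness actually routes these through its rpc_cmd wrapper.

# Same setup as the rpc_cmd calls above, as direct rpc.py invocations
# (arguments copied from the trace; RPC socket path assumed to be the default).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering tool is then pointed at that subsystem over TCP (the -r connection string in the trace); each fused_ordering(N) line that follows appears to be a per-iteration progress counter emitted as the tool submits its fused command sequences against namespace 1.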
00:13:42.297 Attached to nqn.2016-06.io.spdk:cnode1 00:13:42.297 Namespace ID: 1 size: 1GB 00:13:42.297 fused_ordering(0) 00:13:42.297 fused_ordering(1) 00:13:42.297 fused_ordering(2) 00:13:42.297 fused_ordering(3) 00:13:42.297 fused_ordering(4) 00:13:42.297 fused_ordering(5) 00:13:42.297 fused_ordering(6) 00:13:42.297 fused_ordering(7) 00:13:42.297 fused_ordering(8) 00:13:42.297 fused_ordering(9) 00:13:42.297 fused_ordering(10) 00:13:42.297 fused_ordering(11) 00:13:42.297 fused_ordering(12) 00:13:42.297 fused_ordering(13) 00:13:42.297 fused_ordering(14) 00:13:42.297 fused_ordering(15) 00:13:42.297 fused_ordering(16) 00:13:42.297 fused_ordering(17) 00:13:42.297 fused_ordering(18) 00:13:42.297 fused_ordering(19) 00:13:42.297 fused_ordering(20) 00:13:42.297 fused_ordering(21) 00:13:42.297 fused_ordering(22) 00:13:42.297 fused_ordering(23) 00:13:42.297 fused_ordering(24) 00:13:42.297 fused_ordering(25) 00:13:42.297 fused_ordering(26) 00:13:42.297 fused_ordering(27) 00:13:42.297 fused_ordering(28) 00:13:42.297 fused_ordering(29) 00:13:42.297 fused_ordering(30) 00:13:42.297 fused_ordering(31) 00:13:42.297 fused_ordering(32) 00:13:42.297 fused_ordering(33) 00:13:42.297 fused_ordering(34) 00:13:42.297 fused_ordering(35) 00:13:42.297 fused_ordering(36) 00:13:42.297 fused_ordering(37) 00:13:42.297 fused_ordering(38) 00:13:42.297 fused_ordering(39) 00:13:42.297 fused_ordering(40) 00:13:42.297 fused_ordering(41) 00:13:42.297 fused_ordering(42) 00:13:42.297 fused_ordering(43) 00:13:42.297 fused_ordering(44) 00:13:42.297 fused_ordering(45) 00:13:42.297 fused_ordering(46) 00:13:42.297 fused_ordering(47) 00:13:42.297 fused_ordering(48) 00:13:42.297 fused_ordering(49) 00:13:42.297 fused_ordering(50) 00:13:42.297 fused_ordering(51) 00:13:42.297 fused_ordering(52) 00:13:42.297 fused_ordering(53) 00:13:42.297 fused_ordering(54) 00:13:42.297 fused_ordering(55) 00:13:42.297 fused_ordering(56) 00:13:42.297 fused_ordering(57) 00:13:42.297 fused_ordering(58) 00:13:42.297 fused_ordering(59) 00:13:42.297 fused_ordering(60) 00:13:42.297 fused_ordering(61) 00:13:42.297 fused_ordering(62) 00:13:42.297 fused_ordering(63) 00:13:42.297 fused_ordering(64) 00:13:42.297 fused_ordering(65) 00:13:42.297 fused_ordering(66) 00:13:42.297 fused_ordering(67) 00:13:42.297 fused_ordering(68) 00:13:42.297 fused_ordering(69) 00:13:42.297 fused_ordering(70) 00:13:42.298 fused_ordering(71) 00:13:42.298 fused_ordering(72) 00:13:42.298 fused_ordering(73) 00:13:42.298 fused_ordering(74) 00:13:42.298 fused_ordering(75) 00:13:42.298 fused_ordering(76) 00:13:42.298 fused_ordering(77) 00:13:42.298 fused_ordering(78) 00:13:42.298 fused_ordering(79) 00:13:42.298 fused_ordering(80) 00:13:42.298 fused_ordering(81) 00:13:42.298 fused_ordering(82) 00:13:42.298 fused_ordering(83) 00:13:42.298 fused_ordering(84) 00:13:42.298 fused_ordering(85) 00:13:42.298 fused_ordering(86) 00:13:42.298 fused_ordering(87) 00:13:42.298 fused_ordering(88) 00:13:42.298 fused_ordering(89) 00:13:42.298 fused_ordering(90) 00:13:42.298 fused_ordering(91) 00:13:42.298 fused_ordering(92) 00:13:42.298 fused_ordering(93) 00:13:42.298 fused_ordering(94) 00:13:42.298 fused_ordering(95) 00:13:42.298 fused_ordering(96) 00:13:42.298 fused_ordering(97) 00:13:42.298 fused_ordering(98) 00:13:42.298 fused_ordering(99) 00:13:42.298 fused_ordering(100) 00:13:42.298 fused_ordering(101) 00:13:42.298 fused_ordering(102) 00:13:42.298 fused_ordering(103) 00:13:42.298 fused_ordering(104) 00:13:42.298 fused_ordering(105) 00:13:42.298 fused_ordering(106) 00:13:42.298 fused_ordering(107) 
00:13:42.298 fused_ordering(108) 00:13:42.298 fused_ordering(109) 00:13:42.298 fused_ordering(110) 00:13:42.298 fused_ordering(111) 00:13:42.298 fused_ordering(112) 00:13:42.298 fused_ordering(113) 00:13:42.298 fused_ordering(114) 00:13:42.298 fused_ordering(115) 00:13:42.298 fused_ordering(116) 00:13:42.298 fused_ordering(117) 00:13:42.298 fused_ordering(118) 00:13:42.298 fused_ordering(119) 00:13:42.298 fused_ordering(120) 00:13:42.298 fused_ordering(121) 00:13:42.298 fused_ordering(122) 00:13:42.298 fused_ordering(123) 00:13:42.298 fused_ordering(124) 00:13:42.298 fused_ordering(125) 00:13:42.298 fused_ordering(126) 00:13:42.298 fused_ordering(127) 00:13:42.298 fused_ordering(128) 00:13:42.298 fused_ordering(129) 00:13:42.298 fused_ordering(130) 00:13:42.298 fused_ordering(131) 00:13:42.298 fused_ordering(132) 00:13:42.298 fused_ordering(133) 00:13:42.298 fused_ordering(134) 00:13:42.298 fused_ordering(135) 00:13:42.298 fused_ordering(136) 00:13:42.298 fused_ordering(137) 00:13:42.298 fused_ordering(138) 00:13:42.298 fused_ordering(139) 00:13:42.298 fused_ordering(140) 00:13:42.298 fused_ordering(141) 00:13:42.298 fused_ordering(142) 00:13:42.298 fused_ordering(143) 00:13:42.298 fused_ordering(144) 00:13:42.298 fused_ordering(145) 00:13:42.298 fused_ordering(146) 00:13:42.298 fused_ordering(147) 00:13:42.298 fused_ordering(148) 00:13:42.298 fused_ordering(149) 00:13:42.298 fused_ordering(150) 00:13:42.298 fused_ordering(151) 00:13:42.298 fused_ordering(152) 00:13:42.298 fused_ordering(153) 00:13:42.298 fused_ordering(154) 00:13:42.298 fused_ordering(155) 00:13:42.298 fused_ordering(156) 00:13:42.298 fused_ordering(157) 00:13:42.298 fused_ordering(158) 00:13:42.298 fused_ordering(159) 00:13:42.298 fused_ordering(160) 00:13:42.298 fused_ordering(161) 00:13:42.298 fused_ordering(162) 00:13:42.298 fused_ordering(163) 00:13:42.298 fused_ordering(164) 00:13:42.298 fused_ordering(165) 00:13:42.298 fused_ordering(166) 00:13:42.298 fused_ordering(167) 00:13:42.298 fused_ordering(168) 00:13:42.298 fused_ordering(169) 00:13:42.298 fused_ordering(170) 00:13:42.298 fused_ordering(171) 00:13:42.298 fused_ordering(172) 00:13:42.298 fused_ordering(173) 00:13:42.298 fused_ordering(174) 00:13:42.298 fused_ordering(175) 00:13:42.298 fused_ordering(176) 00:13:42.298 fused_ordering(177) 00:13:42.298 fused_ordering(178) 00:13:42.298 fused_ordering(179) 00:13:42.298 fused_ordering(180) 00:13:42.298 fused_ordering(181) 00:13:42.298 fused_ordering(182) 00:13:42.298 fused_ordering(183) 00:13:42.298 fused_ordering(184) 00:13:42.298 fused_ordering(185) 00:13:42.298 fused_ordering(186) 00:13:42.298 fused_ordering(187) 00:13:42.298 fused_ordering(188) 00:13:42.298 fused_ordering(189) 00:13:42.298 fused_ordering(190) 00:13:42.298 fused_ordering(191) 00:13:42.298 fused_ordering(192) 00:13:42.298 fused_ordering(193) 00:13:42.298 fused_ordering(194) 00:13:42.298 fused_ordering(195) 00:13:42.298 fused_ordering(196) 00:13:42.298 fused_ordering(197) 00:13:42.298 fused_ordering(198) 00:13:42.298 fused_ordering(199) 00:13:42.298 fused_ordering(200) 00:13:42.298 fused_ordering(201) 00:13:42.298 fused_ordering(202) 00:13:42.298 fused_ordering(203) 00:13:42.298 fused_ordering(204) 00:13:42.298 fused_ordering(205) 00:13:42.558 fused_ordering(206) 00:13:42.558 fused_ordering(207) 00:13:42.558 fused_ordering(208) 00:13:42.558 fused_ordering(209) 00:13:42.558 fused_ordering(210) 00:13:42.558 fused_ordering(211) 00:13:42.558 fused_ordering(212) 00:13:42.558 fused_ordering(213) 00:13:42.558 fused_ordering(214) 00:13:42.558 
fused_ordering(215) 00:13:42.558 fused_ordering(216) 00:13:42.558 fused_ordering(217) 00:13:42.558 fused_ordering(218) 00:13:42.558 fused_ordering(219) 00:13:42.558 fused_ordering(220) 00:13:42.558 fused_ordering(221) 00:13:42.558 fused_ordering(222) 00:13:42.558 fused_ordering(223) 00:13:42.558 fused_ordering(224) 00:13:42.558 fused_ordering(225) 00:13:42.558 fused_ordering(226) 00:13:42.558 fused_ordering(227) 00:13:42.558 fused_ordering(228) 00:13:42.558 fused_ordering(229) 00:13:42.558 fused_ordering(230) 00:13:42.558 fused_ordering(231) 00:13:42.558 fused_ordering(232) 00:13:42.558 fused_ordering(233) 00:13:42.558 fused_ordering(234) 00:13:42.558 fused_ordering(235) 00:13:42.558 fused_ordering(236) 00:13:42.558 fused_ordering(237) 00:13:42.558 fused_ordering(238) 00:13:42.558 fused_ordering(239) 00:13:42.558 fused_ordering(240) 00:13:42.558 fused_ordering(241) 00:13:42.558 fused_ordering(242) 00:13:42.558 fused_ordering(243) 00:13:42.558 fused_ordering(244) 00:13:42.558 fused_ordering(245) 00:13:42.558 fused_ordering(246) 00:13:42.558 fused_ordering(247) 00:13:42.558 fused_ordering(248) 00:13:42.558 fused_ordering(249) 00:13:42.558 fused_ordering(250) 00:13:42.558 fused_ordering(251) 00:13:42.558 fused_ordering(252) 00:13:42.558 fused_ordering(253) 00:13:42.558 fused_ordering(254) 00:13:42.558 fused_ordering(255) 00:13:42.558 fused_ordering(256) 00:13:42.558 fused_ordering(257) 00:13:42.558 fused_ordering(258) 00:13:42.558 fused_ordering(259) 00:13:42.558 fused_ordering(260) 00:13:42.558 fused_ordering(261) 00:13:42.558 fused_ordering(262) 00:13:42.558 fused_ordering(263) 00:13:42.558 fused_ordering(264) 00:13:42.558 fused_ordering(265) 00:13:42.558 fused_ordering(266) 00:13:42.558 fused_ordering(267) 00:13:42.558 fused_ordering(268) 00:13:42.558 fused_ordering(269) 00:13:42.558 fused_ordering(270) 00:13:42.558 fused_ordering(271) 00:13:42.558 fused_ordering(272) 00:13:42.558 fused_ordering(273) 00:13:42.558 fused_ordering(274) 00:13:42.558 fused_ordering(275) 00:13:42.558 fused_ordering(276) 00:13:42.558 fused_ordering(277) 00:13:42.558 fused_ordering(278) 00:13:42.558 fused_ordering(279) 00:13:42.558 fused_ordering(280) 00:13:42.558 fused_ordering(281) 00:13:42.558 fused_ordering(282) 00:13:42.558 fused_ordering(283) 00:13:42.558 fused_ordering(284) 00:13:42.558 fused_ordering(285) 00:13:42.558 fused_ordering(286) 00:13:42.558 fused_ordering(287) 00:13:42.558 fused_ordering(288) 00:13:42.558 fused_ordering(289) 00:13:42.558 fused_ordering(290) 00:13:42.558 fused_ordering(291) 00:13:42.558 fused_ordering(292) 00:13:42.558 fused_ordering(293) 00:13:42.558 fused_ordering(294) 00:13:42.558 fused_ordering(295) 00:13:42.558 fused_ordering(296) 00:13:42.558 fused_ordering(297) 00:13:42.558 fused_ordering(298) 00:13:42.558 fused_ordering(299) 00:13:42.558 fused_ordering(300) 00:13:42.558 fused_ordering(301) 00:13:42.558 fused_ordering(302) 00:13:42.558 fused_ordering(303) 00:13:42.558 fused_ordering(304) 00:13:42.558 fused_ordering(305) 00:13:42.558 fused_ordering(306) 00:13:42.558 fused_ordering(307) 00:13:42.558 fused_ordering(308) 00:13:42.558 fused_ordering(309) 00:13:42.558 fused_ordering(310) 00:13:42.558 fused_ordering(311) 00:13:42.558 fused_ordering(312) 00:13:42.558 fused_ordering(313) 00:13:42.558 fused_ordering(314) 00:13:42.558 fused_ordering(315) 00:13:42.558 fused_ordering(316) 00:13:42.558 fused_ordering(317) 00:13:42.558 fused_ordering(318) 00:13:42.558 fused_ordering(319) 00:13:42.558 fused_ordering(320) 00:13:42.558 fused_ordering(321) 00:13:42.558 fused_ordering(322) 
00:13:42.558 fused_ordering(323) 00:13:42.558 fused_ordering(324) 00:13:42.558 fused_ordering(325) 00:13:42.558 fused_ordering(326) 00:13:42.558 fused_ordering(327) 00:13:42.558 fused_ordering(328) 00:13:42.558 fused_ordering(329) 00:13:42.558 fused_ordering(330) 00:13:42.558 fused_ordering(331) 00:13:42.558 fused_ordering(332) 00:13:42.558 fused_ordering(333) 00:13:42.558 fused_ordering(334) 00:13:42.558 fused_ordering(335) 00:13:42.558 fused_ordering(336) 00:13:42.558 fused_ordering(337) 00:13:42.558 fused_ordering(338) 00:13:42.558 fused_ordering(339) 00:13:42.558 fused_ordering(340) 00:13:42.558 fused_ordering(341) 00:13:42.558 fused_ordering(342) 00:13:42.558 fused_ordering(343) 00:13:42.558 fused_ordering(344) 00:13:42.558 fused_ordering(345) 00:13:42.558 fused_ordering(346) 00:13:42.558 fused_ordering(347) 00:13:42.558 fused_ordering(348) 00:13:42.558 fused_ordering(349) 00:13:42.558 fused_ordering(350) 00:13:42.558 fused_ordering(351) 00:13:42.558 fused_ordering(352) 00:13:42.559 fused_ordering(353) 00:13:42.559 fused_ordering(354) 00:13:42.559 fused_ordering(355) 00:13:42.559 fused_ordering(356) 00:13:42.559 fused_ordering(357) 00:13:42.559 fused_ordering(358) 00:13:42.559 fused_ordering(359) 00:13:42.559 fused_ordering(360) 00:13:42.559 fused_ordering(361) 00:13:42.559 fused_ordering(362) 00:13:42.559 fused_ordering(363) 00:13:42.559 fused_ordering(364) 00:13:42.559 fused_ordering(365) 00:13:42.559 fused_ordering(366) 00:13:42.559 fused_ordering(367) 00:13:42.559 fused_ordering(368) 00:13:42.559 fused_ordering(369) 00:13:42.559 fused_ordering(370) 00:13:42.559 fused_ordering(371) 00:13:42.559 fused_ordering(372) 00:13:42.559 fused_ordering(373) 00:13:42.559 fused_ordering(374) 00:13:42.559 fused_ordering(375) 00:13:42.559 fused_ordering(376) 00:13:42.559 fused_ordering(377) 00:13:42.559 fused_ordering(378) 00:13:42.559 fused_ordering(379) 00:13:42.559 fused_ordering(380) 00:13:42.559 fused_ordering(381) 00:13:42.559 fused_ordering(382) 00:13:42.559 fused_ordering(383) 00:13:42.559 fused_ordering(384) 00:13:42.559 fused_ordering(385) 00:13:42.559 fused_ordering(386) 00:13:42.559 fused_ordering(387) 00:13:42.559 fused_ordering(388) 00:13:42.559 fused_ordering(389) 00:13:42.559 fused_ordering(390) 00:13:42.559 fused_ordering(391) 00:13:42.559 fused_ordering(392) 00:13:42.559 fused_ordering(393) 00:13:42.559 fused_ordering(394) 00:13:42.559 fused_ordering(395) 00:13:42.559 fused_ordering(396) 00:13:42.559 fused_ordering(397) 00:13:42.559 fused_ordering(398) 00:13:42.559 fused_ordering(399) 00:13:42.559 fused_ordering(400) 00:13:42.559 fused_ordering(401) 00:13:42.559 fused_ordering(402) 00:13:42.559 fused_ordering(403) 00:13:42.559 fused_ordering(404) 00:13:42.559 fused_ordering(405) 00:13:42.559 fused_ordering(406) 00:13:42.559 fused_ordering(407) 00:13:42.559 fused_ordering(408) 00:13:42.559 fused_ordering(409) 00:13:42.559 fused_ordering(410) 00:13:43.127 fused_ordering(411) 00:13:43.127 fused_ordering(412) 00:13:43.128 fused_ordering(413) 00:13:43.128 fused_ordering(414) 00:13:43.128 fused_ordering(415) 00:13:43.128 fused_ordering(416) 00:13:43.128 fused_ordering(417) 00:13:43.128 fused_ordering(418) 00:13:43.128 fused_ordering(419) 00:13:43.128 fused_ordering(420) 00:13:43.128 fused_ordering(421) 00:13:43.128 fused_ordering(422) 00:13:43.128 fused_ordering(423) 00:13:43.128 fused_ordering(424) 00:13:43.128 fused_ordering(425) 00:13:43.128 fused_ordering(426) 00:13:43.128 fused_ordering(427) 00:13:43.128 fused_ordering(428) 00:13:43.128 fused_ordering(429) 00:13:43.128 
fused_ordering(430) 00:13:43.128 fused_ordering(431) 00:13:43.128 fused_ordering(432) 00:13:43.128 fused_ordering(433) 00:13:43.128 fused_ordering(434) 00:13:43.128 fused_ordering(435) 00:13:43.128 fused_ordering(436) 00:13:43.128 fused_ordering(437) 00:13:43.128 fused_ordering(438) 00:13:43.128 fused_ordering(439) 00:13:43.128 fused_ordering(440) 00:13:43.128 fused_ordering(441) 00:13:43.128 fused_ordering(442) 00:13:43.128 fused_ordering(443) 00:13:43.128 fused_ordering(444) 00:13:43.128 fused_ordering(445) 00:13:43.128 fused_ordering(446) 00:13:43.128 fused_ordering(447) 00:13:43.128 fused_ordering(448) 00:13:43.128 fused_ordering(449) 00:13:43.128 fused_ordering(450) 00:13:43.128 fused_ordering(451) 00:13:43.128 fused_ordering(452) 00:13:43.128 fused_ordering(453) 00:13:43.128 fused_ordering(454) 00:13:43.128 fused_ordering(455) 00:13:43.128 fused_ordering(456) 00:13:43.128 fused_ordering(457) 00:13:43.128 fused_ordering(458) 00:13:43.128 fused_ordering(459) 00:13:43.128 fused_ordering(460) 00:13:43.128 fused_ordering(461) 00:13:43.128 fused_ordering(462) 00:13:43.128 fused_ordering(463) 00:13:43.128 fused_ordering(464) 00:13:43.128 fused_ordering(465) 00:13:43.128 fused_ordering(466) 00:13:43.128 fused_ordering(467) 00:13:43.128 fused_ordering(468) 00:13:43.128 fused_ordering(469) 00:13:43.128 fused_ordering(470) 00:13:43.128 fused_ordering(471) 00:13:43.128 fused_ordering(472) 00:13:43.128 fused_ordering(473) 00:13:43.128 fused_ordering(474) 00:13:43.128 fused_ordering(475) 00:13:43.128 fused_ordering(476) 00:13:43.128 fused_ordering(477) 00:13:43.128 fused_ordering(478) 00:13:43.128 fused_ordering(479) 00:13:43.128 fused_ordering(480) 00:13:43.128 fused_ordering(481) 00:13:43.128 fused_ordering(482) 00:13:43.128 fused_ordering(483) 00:13:43.128 fused_ordering(484) 00:13:43.128 fused_ordering(485) 00:13:43.128 fused_ordering(486) 00:13:43.128 fused_ordering(487) 00:13:43.128 fused_ordering(488) 00:13:43.128 fused_ordering(489) 00:13:43.128 fused_ordering(490) 00:13:43.128 fused_ordering(491) 00:13:43.128 fused_ordering(492) 00:13:43.128 fused_ordering(493) 00:13:43.128 fused_ordering(494) 00:13:43.128 fused_ordering(495) 00:13:43.128 fused_ordering(496) 00:13:43.128 fused_ordering(497) 00:13:43.128 fused_ordering(498) 00:13:43.128 fused_ordering(499) 00:13:43.128 fused_ordering(500) 00:13:43.128 fused_ordering(501) 00:13:43.128 fused_ordering(502) 00:13:43.128 fused_ordering(503) 00:13:43.128 fused_ordering(504) 00:13:43.128 fused_ordering(505) 00:13:43.128 fused_ordering(506) 00:13:43.128 fused_ordering(507) 00:13:43.128 fused_ordering(508) 00:13:43.128 fused_ordering(509) 00:13:43.128 fused_ordering(510) 00:13:43.128 fused_ordering(511) 00:13:43.128 fused_ordering(512) 00:13:43.128 fused_ordering(513) 00:13:43.128 fused_ordering(514) 00:13:43.128 fused_ordering(515) 00:13:43.128 fused_ordering(516) 00:13:43.128 fused_ordering(517) 00:13:43.128 fused_ordering(518) 00:13:43.128 fused_ordering(519) 00:13:43.128 fused_ordering(520) 00:13:43.128 fused_ordering(521) 00:13:43.128 fused_ordering(522) 00:13:43.128 fused_ordering(523) 00:13:43.128 fused_ordering(524) 00:13:43.128 fused_ordering(525) 00:13:43.128 fused_ordering(526) 00:13:43.128 fused_ordering(527) 00:13:43.128 fused_ordering(528) 00:13:43.128 fused_ordering(529) 00:13:43.128 fused_ordering(530) 00:13:43.128 fused_ordering(531) 00:13:43.128 fused_ordering(532) 00:13:43.128 fused_ordering(533) 00:13:43.128 fused_ordering(534) 00:13:43.128 fused_ordering(535) 00:13:43.128 fused_ordering(536) 00:13:43.128 fused_ordering(537) 
00:13:43.128 fused_ordering(538) 00:13:43.128 fused_ordering(539) 00:13:43.128 fused_ordering(540) 00:13:43.128 fused_ordering(541) 00:13:43.128 fused_ordering(542) 00:13:43.128 fused_ordering(543) 00:13:43.128 fused_ordering(544) 00:13:43.128 fused_ordering(545) 00:13:43.128 fused_ordering(546) 00:13:43.128 fused_ordering(547) 00:13:43.128 fused_ordering(548) 00:13:43.128 fused_ordering(549) 00:13:43.128 fused_ordering(550) 00:13:43.128 fused_ordering(551) 00:13:43.128 fused_ordering(552) 00:13:43.128 fused_ordering(553) 00:13:43.128 fused_ordering(554) 00:13:43.128 fused_ordering(555) 00:13:43.128 fused_ordering(556) 00:13:43.128 fused_ordering(557) 00:13:43.128 fused_ordering(558) 00:13:43.128 fused_ordering(559) 00:13:43.128 fused_ordering(560) 00:13:43.128 fused_ordering(561) 00:13:43.128 fused_ordering(562) 00:13:43.128 fused_ordering(563) 00:13:43.128 fused_ordering(564) 00:13:43.128 fused_ordering(565) 00:13:43.128 fused_ordering(566) 00:13:43.128 fused_ordering(567) 00:13:43.128 fused_ordering(568) 00:13:43.128 fused_ordering(569) 00:13:43.128 fused_ordering(570) 00:13:43.128 fused_ordering(571) 00:13:43.128 fused_ordering(572) 00:13:43.128 fused_ordering(573) 00:13:43.128 fused_ordering(574) 00:13:43.128 fused_ordering(575) 00:13:43.128 fused_ordering(576) 00:13:43.128 fused_ordering(577) 00:13:43.128 fused_ordering(578) 00:13:43.128 fused_ordering(579) 00:13:43.128 fused_ordering(580) 00:13:43.128 fused_ordering(581) 00:13:43.128 fused_ordering(582) 00:13:43.128 fused_ordering(583) 00:13:43.128 fused_ordering(584) 00:13:43.128 fused_ordering(585) 00:13:43.128 fused_ordering(586) 00:13:43.128 fused_ordering(587) 00:13:43.128 fused_ordering(588) 00:13:43.128 fused_ordering(589) 00:13:43.128 fused_ordering(590) 00:13:43.128 fused_ordering(591) 00:13:43.128 fused_ordering(592) 00:13:43.128 fused_ordering(593) 00:13:43.128 fused_ordering(594) 00:13:43.128 fused_ordering(595) 00:13:43.128 fused_ordering(596) 00:13:43.128 fused_ordering(597) 00:13:43.128 fused_ordering(598) 00:13:43.128 fused_ordering(599) 00:13:43.128 fused_ordering(600) 00:13:43.128 fused_ordering(601) 00:13:43.128 fused_ordering(602) 00:13:43.128 fused_ordering(603) 00:13:43.128 fused_ordering(604) 00:13:43.128 fused_ordering(605) 00:13:43.128 fused_ordering(606) 00:13:43.128 fused_ordering(607) 00:13:43.128 fused_ordering(608) 00:13:43.128 fused_ordering(609) 00:13:43.128 fused_ordering(610) 00:13:43.128 fused_ordering(611) 00:13:43.128 fused_ordering(612) 00:13:43.128 fused_ordering(613) 00:13:43.128 fused_ordering(614) 00:13:43.128 fused_ordering(615) 00:13:43.388 fused_ordering(616) 00:13:43.388 fused_ordering(617) 00:13:43.388 fused_ordering(618) 00:13:43.388 fused_ordering(619) 00:13:43.388 fused_ordering(620) 00:13:43.388 fused_ordering(621) 00:13:43.388 fused_ordering(622) 00:13:43.388 fused_ordering(623) 00:13:43.388 fused_ordering(624) 00:13:43.388 fused_ordering(625) 00:13:43.388 fused_ordering(626) 00:13:43.388 fused_ordering(627) 00:13:43.388 fused_ordering(628) 00:13:43.388 fused_ordering(629) 00:13:43.388 fused_ordering(630) 00:13:43.388 fused_ordering(631) 00:13:43.388 fused_ordering(632) 00:13:43.388 fused_ordering(633) 00:13:43.388 fused_ordering(634) 00:13:43.388 fused_ordering(635) 00:13:43.388 fused_ordering(636) 00:13:43.388 fused_ordering(637) 00:13:43.388 fused_ordering(638) 00:13:43.388 fused_ordering(639) 00:13:43.388 fused_ordering(640) 00:13:43.388 fused_ordering(641) 00:13:43.388 fused_ordering(642) 00:13:43.388 fused_ordering(643) 00:13:43.388 fused_ordering(644) 00:13:43.388 
fused_ordering(645) 00:13:43.388 fused_ordering(646) 00:13:43.388 fused_ordering(647) 00:13:43.388 fused_ordering(648) 00:13:43.388 fused_ordering(649) 00:13:43.388 fused_ordering(650) 00:13:43.388 fused_ordering(651) 00:13:43.388 fused_ordering(652) 00:13:43.388 fused_ordering(653) 00:13:43.388 fused_ordering(654) 00:13:43.388 fused_ordering(655) 00:13:43.388 fused_ordering(656) 00:13:43.388 fused_ordering(657) 00:13:43.388 fused_ordering(658) 00:13:43.388 fused_ordering(659) 00:13:43.388 fused_ordering(660) 00:13:43.388 fused_ordering(661) 00:13:43.388 fused_ordering(662) 00:13:43.388 fused_ordering(663) 00:13:43.388 fused_ordering(664) 00:13:43.388 fused_ordering(665) 00:13:43.388 fused_ordering(666) 00:13:43.388 fused_ordering(667) 00:13:43.388 fused_ordering(668) 00:13:43.388 fused_ordering(669) 00:13:43.388 fused_ordering(670) 00:13:43.388 fused_ordering(671) 00:13:43.388 fused_ordering(672) 00:13:43.388 fused_ordering(673) 00:13:43.388 fused_ordering(674) 00:13:43.388 fused_ordering(675) 00:13:43.388 fused_ordering(676) 00:13:43.388 fused_ordering(677) 00:13:43.388 fused_ordering(678) 00:13:43.388 fused_ordering(679) 00:13:43.388 fused_ordering(680) 00:13:43.388 fused_ordering(681) 00:13:43.388 fused_ordering(682) 00:13:43.388 fused_ordering(683) 00:13:43.388 fused_ordering(684) 00:13:43.388 fused_ordering(685) 00:13:43.388 fused_ordering(686) 00:13:43.388 fused_ordering(687) 00:13:43.388 fused_ordering(688) 00:13:43.388 fused_ordering(689) 00:13:43.388 fused_ordering(690) 00:13:43.388 fused_ordering(691) 00:13:43.388 fused_ordering(692) 00:13:43.388 fused_ordering(693) 00:13:43.388 fused_ordering(694) 00:13:43.388 fused_ordering(695) 00:13:43.388 fused_ordering(696) 00:13:43.388 fused_ordering(697) 00:13:43.388 fused_ordering(698) 00:13:43.388 fused_ordering(699) 00:13:43.388 fused_ordering(700) 00:13:43.388 fused_ordering(701) 00:13:43.388 fused_ordering(702) 00:13:43.388 fused_ordering(703) 00:13:43.388 fused_ordering(704) 00:13:43.388 fused_ordering(705) 00:13:43.388 fused_ordering(706) 00:13:43.388 fused_ordering(707) 00:13:43.388 fused_ordering(708) 00:13:43.388 fused_ordering(709) 00:13:43.388 fused_ordering(710) 00:13:43.388 fused_ordering(711) 00:13:43.388 fused_ordering(712) 00:13:43.388 fused_ordering(713) 00:13:43.388 fused_ordering(714) 00:13:43.388 fused_ordering(715) 00:13:43.388 fused_ordering(716) 00:13:43.388 fused_ordering(717) 00:13:43.388 fused_ordering(718) 00:13:43.388 fused_ordering(719) 00:13:43.388 fused_ordering(720) 00:13:43.388 fused_ordering(721) 00:13:43.388 fused_ordering(722) 00:13:43.388 fused_ordering(723) 00:13:43.388 fused_ordering(724) 00:13:43.388 fused_ordering(725) 00:13:43.388 fused_ordering(726) 00:13:43.388 fused_ordering(727) 00:13:43.388 fused_ordering(728) 00:13:43.388 fused_ordering(729) 00:13:43.388 fused_ordering(730) 00:13:43.388 fused_ordering(731) 00:13:43.388 fused_ordering(732) 00:13:43.388 fused_ordering(733) 00:13:43.388 fused_ordering(734) 00:13:43.388 fused_ordering(735) 00:13:43.388 fused_ordering(736) 00:13:43.388 fused_ordering(737) 00:13:43.388 fused_ordering(738) 00:13:43.388 fused_ordering(739) 00:13:43.388 fused_ordering(740) 00:13:43.388 fused_ordering(741) 00:13:43.388 fused_ordering(742) 00:13:43.388 fused_ordering(743) 00:13:43.388 fused_ordering(744) 00:13:43.388 fused_ordering(745) 00:13:43.388 fused_ordering(746) 00:13:43.388 fused_ordering(747) 00:13:43.388 fused_ordering(748) 00:13:43.388 fused_ordering(749) 00:13:43.388 fused_ordering(750) 00:13:43.388 fused_ordering(751) 00:13:43.388 fused_ordering(752) 
00:13:43.388 fused_ordering(753) 00:13:43.388 fused_ordering(754) 00:13:43.388 fused_ordering(755) 00:13:43.388 fused_ordering(756) 00:13:43.388 fused_ordering(757) 00:13:43.388 fused_ordering(758) 00:13:43.388 fused_ordering(759) 00:13:43.388 fused_ordering(760) 00:13:43.388 fused_ordering(761) 00:13:43.388 fused_ordering(762) 00:13:43.388 fused_ordering(763) 00:13:43.388 fused_ordering(764) 00:13:43.388 fused_ordering(765) 00:13:43.388 fused_ordering(766) 00:13:43.388 fused_ordering(767) 00:13:43.388 fused_ordering(768) 00:13:43.388 fused_ordering(769) 00:13:43.388 fused_ordering(770) 00:13:43.388 fused_ordering(771) 00:13:43.388 fused_ordering(772) 00:13:43.388 fused_ordering(773) 00:13:43.388 fused_ordering(774) 00:13:43.388 fused_ordering(775) 00:13:43.388 fused_ordering(776) 00:13:43.388 fused_ordering(777) 00:13:43.388 fused_ordering(778) 00:13:43.388 fused_ordering(779) 00:13:43.388 fused_ordering(780) 00:13:43.388 fused_ordering(781) 00:13:43.388 fused_ordering(782) 00:13:43.388 fused_ordering(783) 00:13:43.389 fused_ordering(784) 00:13:43.389 fused_ordering(785) 00:13:43.389 fused_ordering(786) 00:13:43.389 fused_ordering(787) 00:13:43.389 fused_ordering(788) 00:13:43.389 fused_ordering(789) 00:13:43.389 fused_ordering(790) 00:13:43.389 fused_ordering(791) 00:13:43.389 fused_ordering(792) 00:13:43.389 fused_ordering(793) 00:13:43.389 fused_ordering(794) 00:13:43.389 fused_ordering(795) 00:13:43.389 fused_ordering(796) 00:13:43.389 fused_ordering(797) 00:13:43.389 fused_ordering(798) 00:13:43.389 fused_ordering(799) 00:13:43.389 fused_ordering(800) 00:13:43.389 fused_ordering(801) 00:13:43.389 fused_ordering(802) 00:13:43.389 fused_ordering(803) 00:13:43.389 fused_ordering(804) 00:13:43.389 fused_ordering(805) 00:13:43.389 fused_ordering(806) 00:13:43.389 fused_ordering(807) 00:13:43.389 fused_ordering(808) 00:13:43.389 fused_ordering(809) 00:13:43.389 fused_ordering(810) 00:13:43.389 fused_ordering(811) 00:13:43.389 fused_ordering(812) 00:13:43.389 fused_ordering(813) 00:13:43.389 fused_ordering(814) 00:13:43.389 fused_ordering(815) 00:13:43.389 fused_ordering(816) 00:13:43.389 fused_ordering(817) 00:13:43.389 fused_ordering(818) 00:13:43.389 fused_ordering(819) 00:13:43.389 fused_ordering(820) 00:13:43.957 fused_ordering(821) 00:13:43.957 fused_ordering(822) 00:13:43.957 fused_ordering(823) 00:13:43.957 fused_ordering(824) 00:13:43.957 fused_ordering(825) 00:13:43.957 fused_ordering(826) 00:13:43.957 fused_ordering(827) 00:13:43.958 fused_ordering(828) 00:13:43.958 fused_ordering(829) 00:13:43.958 fused_ordering(830) 00:13:43.958 fused_ordering(831) 00:13:43.958 fused_ordering(832) 00:13:43.958 fused_ordering(833) 00:13:43.958 fused_ordering(834) 00:13:43.958 fused_ordering(835) 00:13:43.958 fused_ordering(836) 00:13:43.958 fused_ordering(837) 00:13:43.958 fused_ordering(838) 00:13:43.958 fused_ordering(839) 00:13:43.958 fused_ordering(840) 00:13:43.958 fused_ordering(841) 00:13:43.958 fused_ordering(842) 00:13:43.958 fused_ordering(843) 00:13:43.958 fused_ordering(844) 00:13:43.958 fused_ordering(845) 00:13:43.958 fused_ordering(846) 00:13:43.958 fused_ordering(847) 00:13:43.958 fused_ordering(848) 00:13:43.958 fused_ordering(849) 00:13:43.958 fused_ordering(850) 00:13:43.958 fused_ordering(851) 00:13:43.958 fused_ordering(852) 00:13:43.958 fused_ordering(853) 00:13:43.958 fused_ordering(854) 00:13:43.958 fused_ordering(855) 00:13:43.958 fused_ordering(856) 00:13:43.958 fused_ordering(857) 00:13:43.958 fused_ordering(858) 00:13:43.958 fused_ordering(859) 00:13:43.958 
fused_ordering(860) 00:13:43.958 fused_ordering(861) 00:13:43.958 fused_ordering(862) 00:13:43.958 fused_ordering(863) 00:13:43.958 fused_ordering(864) 00:13:43.958 fused_ordering(865) 00:13:43.958 fused_ordering(866) 00:13:43.958 fused_ordering(867) 00:13:43.958 fused_ordering(868) 00:13:43.958 fused_ordering(869) 00:13:43.958 fused_ordering(870) 00:13:43.958 fused_ordering(871) 00:13:43.958 fused_ordering(872) 00:13:43.958 fused_ordering(873) 00:13:43.958 fused_ordering(874) 00:13:43.958 fused_ordering(875) 00:13:43.958 fused_ordering(876) 00:13:43.958 fused_ordering(877) 00:13:43.958 fused_ordering(878) 00:13:43.958 fused_ordering(879) 00:13:43.958 fused_ordering(880) 00:13:43.958 fused_ordering(881) 00:13:43.958 fused_ordering(882) 00:13:43.958 fused_ordering(883) 00:13:43.958 fused_ordering(884) 00:13:43.958 fused_ordering(885) 00:13:43.958 fused_ordering(886) 00:13:43.958 fused_ordering(887) 00:13:43.958 fused_ordering(888) 00:13:43.958 fused_ordering(889) 00:13:43.958 fused_ordering(890) 00:13:43.958 fused_ordering(891) 00:13:43.958 fused_ordering(892) 00:13:43.958 fused_ordering(893) 00:13:43.958 fused_ordering(894) 00:13:43.958 fused_ordering(895) 00:13:43.958 fused_ordering(896) 00:13:43.958 fused_ordering(897) 00:13:43.958 fused_ordering(898) 00:13:43.958 fused_ordering(899) 00:13:43.958 fused_ordering(900) 00:13:43.958 fused_ordering(901) 00:13:43.958 fused_ordering(902) 00:13:43.958 fused_ordering(903) 00:13:43.958 fused_ordering(904) 00:13:43.958 fused_ordering(905) 00:13:43.958 fused_ordering(906) 00:13:43.958 fused_ordering(907) 00:13:43.958 fused_ordering(908) 00:13:43.958 fused_ordering(909) 00:13:43.958 fused_ordering(910) 00:13:43.958 fused_ordering(911) 00:13:43.958 fused_ordering(912) 00:13:43.958 fused_ordering(913) 00:13:43.958 fused_ordering(914) 00:13:43.958 fused_ordering(915) 00:13:43.958 fused_ordering(916) 00:13:43.958 fused_ordering(917) 00:13:43.958 fused_ordering(918) 00:13:43.958 fused_ordering(919) 00:13:43.958 fused_ordering(920) 00:13:43.958 fused_ordering(921) 00:13:43.958 fused_ordering(922) 00:13:43.958 fused_ordering(923) 00:13:43.958 fused_ordering(924) 00:13:43.958 fused_ordering(925) 00:13:43.958 fused_ordering(926) 00:13:43.958 fused_ordering(927) 00:13:43.958 fused_ordering(928) 00:13:43.958 fused_ordering(929) 00:13:43.958 fused_ordering(930) 00:13:43.958 fused_ordering(931) 00:13:43.958 fused_ordering(932) 00:13:43.958 fused_ordering(933) 00:13:43.958 fused_ordering(934) 00:13:43.958 fused_ordering(935) 00:13:43.958 fused_ordering(936) 00:13:43.958 fused_ordering(937) 00:13:43.958 fused_ordering(938) 00:13:43.958 fused_ordering(939) 00:13:43.958 fused_ordering(940) 00:13:43.958 fused_ordering(941) 00:13:43.958 fused_ordering(942) 00:13:43.958 fused_ordering(943) 00:13:43.958 fused_ordering(944) 00:13:43.958 fused_ordering(945) 00:13:43.958 fused_ordering(946) 00:13:43.958 fused_ordering(947) 00:13:43.958 fused_ordering(948) 00:13:43.958 fused_ordering(949) 00:13:43.958 fused_ordering(950) 00:13:43.958 fused_ordering(951) 00:13:43.958 fused_ordering(952) 00:13:43.958 fused_ordering(953) 00:13:43.958 fused_ordering(954) 00:13:43.958 fused_ordering(955) 00:13:43.958 fused_ordering(956) 00:13:43.958 fused_ordering(957) 00:13:43.958 fused_ordering(958) 00:13:43.958 fused_ordering(959) 00:13:43.958 fused_ordering(960) 00:13:43.958 fused_ordering(961) 00:13:43.958 fused_ordering(962) 00:13:43.958 fused_ordering(963) 00:13:43.958 fused_ordering(964) 00:13:43.958 fused_ordering(965) 00:13:43.958 fused_ordering(966) 00:13:43.958 fused_ordering(967) 
00:13:43.958 fused_ordering(968) 00:13:43.958 fused_ordering(969) 00:13:43.958 fused_ordering(970) 00:13:43.958 fused_ordering(971) 00:13:43.958 fused_ordering(972) 00:13:43.958 fused_ordering(973) 00:13:43.958 fused_ordering(974) 00:13:43.958 fused_ordering(975) 00:13:43.958 fused_ordering(976) 00:13:43.958 fused_ordering(977) 00:13:43.958 fused_ordering(978) 00:13:43.958 fused_ordering(979) 00:13:43.958 fused_ordering(980) 00:13:43.958 fused_ordering(981) 00:13:43.958 fused_ordering(982) 00:13:43.958 fused_ordering(983) 00:13:43.958 fused_ordering(984) 00:13:43.958 fused_ordering(985) 00:13:43.958 fused_ordering(986) 00:13:43.958 fused_ordering(987) 00:13:43.958 fused_ordering(988) 00:13:43.958 fused_ordering(989) 00:13:43.958 fused_ordering(990) 00:13:43.958 fused_ordering(991) 00:13:43.958 fused_ordering(992) 00:13:43.958 fused_ordering(993) 00:13:43.958 fused_ordering(994) 00:13:43.958 fused_ordering(995) 00:13:43.958 fused_ordering(996) 00:13:43.958 fused_ordering(997) 00:13:43.958 fused_ordering(998) 00:13:43.958 fused_ordering(999) 00:13:43.958 fused_ordering(1000) 00:13:43.958 fused_ordering(1001) 00:13:43.958 fused_ordering(1002) 00:13:43.958 fused_ordering(1003) 00:13:43.958 fused_ordering(1004) 00:13:43.958 fused_ordering(1005) 00:13:43.958 fused_ordering(1006) 00:13:43.958 fused_ordering(1007) 00:13:43.958 fused_ordering(1008) 00:13:43.958 fused_ordering(1009) 00:13:43.958 fused_ordering(1010) 00:13:43.958 fused_ordering(1011) 00:13:43.958 fused_ordering(1012) 00:13:43.958 fused_ordering(1013) 00:13:43.958 fused_ordering(1014) 00:13:43.958 fused_ordering(1015) 00:13:43.958 fused_ordering(1016) 00:13:43.958 fused_ordering(1017) 00:13:43.958 fused_ordering(1018) 00:13:43.958 fused_ordering(1019) 00:13:43.958 fused_ordering(1020) 00:13:43.958 fused_ordering(1021) 00:13:43.958 fused_ordering(1022) 00:13:43.958 fused_ordering(1023) 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.958 rmmod nvme_tcp 00:13:43.958 rmmod nvme_fabrics 00:13:43.958 rmmod nvme_keyring 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1547434 ']' 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1547434 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1547434 ']' 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1547434 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:13:43.958 19:19:54 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.958 19:19:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1547434 00:13:44.216 19:19:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:44.216 19:19:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:44.216 19:19:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1547434' 00:13:44.216 killing process with pid 1547434 00:13:44.216 19:19:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1547434 00:13:44.216 19:19:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1547434 00:13:44.216 19:19:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.216 19:19:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.216 19:19:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.216 19:19:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.216 19:19:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.216 19:19:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.216 19:19:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.217 19:19:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.748 19:19:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:46.748 00:13:46.748 real 0m9.693s 00:13:46.748 user 0m4.545s 00:13:46.748 sys 0m5.236s 00:13:46.748 19:19:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:46.748 19:19:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.748 ************************************ 00:13:46.748 END TEST nvmf_fused_ordering 00:13:46.748 ************************************ 00:13:46.748 19:19:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:46.748 19:19:57 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:46.748 19:19:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:46.748 19:19:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.748 19:19:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:46.748 ************************************ 00:13:46.748 START TEST nvmf_delete_subsystem 00:13:46.748 ************************************ 00:13:46.748 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:46.748 * Looking for test storage... 
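For reference, the nvmftestfini sequence that closes out the fused-ordering run above amounts to unloading the initiator-side kernel NVMe/TCP modules, stopping the nvmf_tgt process, and flushing the test interface. The following is only a simplified, hand-written sketch of that teardown; the PID, namespace name and interface name are taken from this particular run, the netns deletion step is an assumption about what remove_spdk_ns does, and the error handling of the real common.sh helpers is omitted:

    # unload the initiator-side kernel modules (mirrors nvmfcleanup)
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # stop the SPDK target started for the test (mirrors killprocess)
    nvmfpid=1547434                       # pid reported earlier in this log
    if kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"
        wait "$nvmfpid" 2>/dev/null || true   # wait only succeeds if the target was launched from this shell
    fi

    # tear down the test networking (mirrors nvmf_tcp_fini / remove_spdk_ns)
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1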
00:13:46.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.748 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.748 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:46.748 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.748 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.748 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.748 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.748 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.748 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:46.749 19:19:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.017 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.017 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.017 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.017 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.017 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.017 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.017 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.017 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.017 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:52.018 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:52.018 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:52.018 Found net devices under 0000:86:00.0: cvl_0_0 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:52.018 Found net devices under 0000:86:00.1: cvl_0_1 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:13:52.018 00:13:52.018 --- 10.0.0.2 ping statistics --- 00:13:52.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.018 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:13:52.018 00:13:52.018 --- 10.0.0.1 ping statistics --- 00:13:52.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.018 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1551211 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1551211 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1551211 ']' 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
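The delete_subsystem test initializes its environment the way the log shows above: it picks the two ice ports (cvl_0_0 and cvl_0_1), isolates one of them in a network namespace, gives each side an address on 10.0.0.0/24, verifies connectivity in both directions, and then starts nvmf_tgt inside the namespace. A condensed, hand-written sketch of that setup follows; the interface names, addresses and application flags are copied from this run, the nvmf_tgt path is shortened to a relative one, and the polling loop at the end is a simplified stand-in for the real waitforlisten helper:

    # network plumbing (mirrors nvmf_tcp_init)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # quick connectivity check in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # start the target inside the namespace (mirrors nvmfappstart -m 0x3)
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # simplified stand-in for waitforlisten: poll for the RPC socket
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done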
00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.018 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.018 [2024-07-15 19:20:02.791313] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:13:52.018 [2024-07-15 19:20:02.791358] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.018 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.018 [2024-07-15 19:20:02.821160] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:52.018 [2024-07-15 19:20:02.849432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:52.277 [2024-07-15 19:20:02.890318] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.277 [2024-07-15 19:20:02.890356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.277 [2024-07-15 19:20:02.890363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.277 [2024-07-15 19:20:02.890368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.277 [2024-07-15 19:20:02.890373] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.277 [2024-07-15 19:20:02.890418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.277 [2024-07-15 19:20:02.890421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.277 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.277 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:13:52.277 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.277 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.277 19:20:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.277 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.277 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.277 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.277 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.277 [2024-07-15 19:20:03.018598] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.277 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.277 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:52.277 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 [2024-07-15 19:20:03.034746] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 NULL1 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 Delay0 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1551233 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:52.278 19:20:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:52.278 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.278 [2024-07-15 19:20:03.109259] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
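Once the target is listening, the rpc_cmd calls above configure it, and the scenario under test is deleting the subsystem while spdk_nvme_perf is still driving I/O against it. Expressed directly against scripts/rpc.py (rpc_cmd is essentially a wrapper around it in these tests), the configuration looks roughly like the sketch below; the flags are taken verbatim from the log, the perf binary path is shortened to a relative one, and the ordering is an approximation of what delete_subsystem.sh does:

    rpc=./scripts/rpc.py              # talks to /var/tmp/spdk.sock by default

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # back the namespace with a null bdev wrapped in a delay bdev,
    # so requests stay in flight long enough to race with the delete
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # drive I/O from the initiator side, then delete the subsystem underneath it
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1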
00:13:54.817 19:20:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.818 19:20:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.818 19:20:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error 
(sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 [2024-07-15 19:20:05.190360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6884000c00 is same with the state(5) to be set 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 starting I/O failed: -6 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read 
completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 
00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Write completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:54.818 Read completed with error (sct=0, sc=8) 00:13:55.387 [2024-07-15 19:20:06.163187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d2e0 is same with the state(5) to be set 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 [2024-07-15 19:20:06.191148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f688400d2f0 is same with the state(5) to be set 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 
Write completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 Read completed with error (sct=0, sc=8) 00:13:55.387 Write completed with error (sct=0, sc=8) 00:13:55.387 [2024-07-15 19:20:06.192776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4d20 is same with the state(5) to be set 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 [2024-07-15 19:20:06.192945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4960 is same with the state(5) to be set 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed 
with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Write completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 Read completed with error (sct=0, sc=8) 00:13:55.388 [2024-07-15 19:20:06.193099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d4c0 is same with the state(5) to be set 00:13:55.388 Initializing NVMe Controllers 00:13:55.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.388 Controller IO queue size 128, less than required. 00:13:55.388 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:55.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:55.388 Initialization complete. Launching workers. 
00:13:55.388 ======================================================== 00:13:55.388 Latency(us) 00:13:55.388 Device Information : IOPS MiB/s Average min max 00:13:55.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.58 0.10 943806.12 671.83 1012141.16 00:13:55.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.90 0.07 891598.63 235.07 1011648.78 00:13:55.388 ======================================================== 00:13:55.388 Total : 347.49 0.17 920983.99 235.07 1012141.16 00:13:55.388 00:13:55.388 [2024-07-15 19:20:06.193608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118d2e0 (9): Bad file descriptor 00:13:55.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:55.388 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.388 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:55.388 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1551233 00:13:55.388 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1551233 00:13:55.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1551233) - No such process 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1551233 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1551233 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1551233 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:55.955 [2024-07-15 19:20:06.720985] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1551922 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1551922 00:13:55.955 19:20:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:55.955 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.955 [2024-07-15 19:20:06.782387] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:13:56.522 19:20:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:56.522 19:20:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1551922 00:13:56.522 19:20:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:57.089 19:20:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:57.089 19:20:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1551922 00:13:57.089 19:20:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:57.657 19:20:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:57.657 19:20:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1551922 00:13:57.657 19:20:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:57.915 19:20:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:57.915 19:20:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1551922 00:13:57.915 19:20:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:58.520 19:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:58.520 19:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1551922 00:13:58.520 19:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.087 19:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:59.087 19:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1551922 00:13:59.087 19:20:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.346 Initializing NVMe Controllers 00:13:59.346 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.346 Controller IO queue size 128, less than required. 00:13:59.346 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:59.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:59.346 Initialization complete. Launching workers. 
00:13:59.346 ======================================================== 00:13:59.346 Latency(us) 00:13:59.346 Device Information : IOPS MiB/s Average min max 00:13:59.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003245.25 1000162.05 1042175.30 00:13:59.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005277.18 1000291.25 1042274.95 00:13:59.346 ======================================================== 00:13:59.346 Total : 256.00 0.12 1004261.22 1000162.05 1042274.95 00:13:59.346 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1551922 00:13:59.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1551922) - No such process 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1551922 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:59.605 rmmod nvme_tcp 00:13:59.605 rmmod nvme_fabrics 00:13:59.605 rmmod nvme_keyring 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1551211 ']' 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1551211 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1551211 ']' 00:13:59.605 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1551211 00:13:59.606 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:13:59.606 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.606 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1551211 00:13:59.606 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:59.606 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:59.606 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1551211' 00:13:59.606 killing process with pid 1551211 00:13:59.606 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1551211 00:13:59.606 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1551211 00:13:59.865 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.865 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.865 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.865 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.865 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.865 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.865 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.865 19:20:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.769 19:20:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.028 00:14:02.028 real 0m15.487s 00:14:02.028 user 0m28.948s 00:14:02.028 sys 0m4.897s 00:14:02.028 19:20:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:02.028 19:20:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:02.028 ************************************ 00:14:02.028 END TEST nvmf_delete_subsystem 00:14:02.028 ************************************ 00:14:02.028 19:20:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:02.028 19:20:12 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:02.028 19:20:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:02.028 19:20:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.028 19:20:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.028 ************************************ 00:14:02.028 START TEST nvmf_ns_masking 00:14:02.028 ************************************ 00:14:02.028 19:20:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:02.028 * Looking for test storage... 
00:14:02.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.028 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.028 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:02.028 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.028 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.028 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.028 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7c2a5b4a-b2d1-4898-a958-8a5d5c3b28a7 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=93dfb335-436f-42f3-bb57-df88ce746976 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ad956f4b-bc53-436c-8ed3-9a5e0af1816a 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.029 19:20:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:07.302 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.302 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.302 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.302 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.302 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.302 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.302 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.302 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:07.303 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:07.303 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.303 
19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:07.303 Found net devices under 0000:86:00.0: cvl_0_0 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:07.303 Found net devices under 0000:86:00.1: cvl_0_1 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.303 19:20:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.303 19:20:18 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.303 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.303 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.303 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.303 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.303 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.303 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:14:07.303 00:14:07.303 --- 10.0.0.2 ping statistics --- 00:14:07.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.303 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:14:07.303 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:14:07.562 00:14:07.562 --- 10.0.0.1 ping statistics --- 00:14:07.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.562 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1555914 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1555914 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1555914 ']' 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.562 19:20:18 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.562 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:07.562 [2024-07-15 19:20:18.244889] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:07.562 [2024-07-15 19:20:18.244937] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.562 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.562 [2024-07-15 19:20:18.274521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:07.562 [2024-07-15 19:20:18.302421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.562 [2024-07-15 19:20:18.343005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.562 [2024-07-15 19:20:18.343043] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.562 [2024-07-15 19:20:18.343051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.562 [2024-07-15 19:20:18.343057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.562 [2024-07-15 19:20:18.343062] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:07.562 [2024-07-15 19:20:18.343080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.821 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.821 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:07.821 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:07.821 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:07.821 19:20:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:07.821 19:20:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.821 19:20:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:07.821 [2024-07-15 19:20:18.616403] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.821 19:20:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:07.821 19:20:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:07.821 19:20:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:08.079 Malloc1 00:14:08.079 19:20:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:08.338 Malloc2 00:14:08.338 19:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:08.596 19:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:08.596 19:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.855 [2024-07-15 19:20:19.523794] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.855 19:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:08.855 19:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ad956f4b-bc53-436c-8ed3-9a5e0af1816a -a 10.0.0.2 -s 4420 -i 4 00:14:09.114 19:20:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:09.114 19:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:09.114 19:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.114 19:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:09.114 19:20:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.018 [ 0]:0x1 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d28e4f6bc7284da09443ebfc644a45b2 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d28e4f6bc7284da09443ebfc644a45b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.018 19:20:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.276 [ 0]:0x1 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d28e4f6bc7284da09443ebfc644a45b2 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d28e4f6bc7284da09443ebfc644a45b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.276 [ 1]:0x2 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.276 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.534 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0ba34c795834d5bbabaf6c66cc1e5a6 00:14:11.534 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0ba34c795834d5bbabaf6c66cc1e5a6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.534 19:20:22 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:11.534 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.794 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.794 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:12.053 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:12.053 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ad956f4b-bc53-436c-8ed3-9a5e0af1816a -a 10.0.0.2 -s 4420 -i 4 00:14:12.311 19:20:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:12.311 19:20:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:12.311 19:20:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.311 19:20:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:12.311 19:20:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:12.311 19:20:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:14.213 19:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:14.213 19:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:14.213 19:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.213 19:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:14.213 19:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.213 19:20:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:14.213 19:20:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:14.213 19:20:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.213 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:14.472 [ 0]:0x2 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0ba34c795834d5bbabaf6c66cc1e5a6 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0ba34c795834d5bbabaf6c66cc1e5a6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.472 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:14.731 [ 0]:0x1 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d28e4f6bc7284da09443ebfc644a45b2 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d28e4f6bc7284da09443ebfc644a45b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:14.731 [ 1]:0x2 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.731 19:20:25 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0ba34c795834d5bbabaf6c66cc1e5a6 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0ba34c795834d5bbabaf6c66cc1e5a6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.731 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:14.990 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:14.990 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:14.990 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:14.990 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:14.990 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.990 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:14.990 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.990 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:14.991 [ 0]:0x2 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0ba34c795834d5bbabaf6c66cc1e5a6 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0ba34c795834d5bbabaf6c66cc1e5a6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:14.991 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:15.250 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:15.250 19:20:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ad956f4b-bc53-436c-8ed3-9a5e0af1816a -a 10.0.0.2 -s 4420 -i 4 00:14:15.250 19:20:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:15.250 19:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:15.250 19:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.250 19:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:15.250 19:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:15.250 19:20:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.846 [ 0]:0x1 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d28e4f6bc7284da09443ebfc644a45b2 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d28e4f6bc7284da09443ebfc644a45b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.846 [ 1]:0x2 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:17.846 
19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0ba34c795834d5bbabaf6c66cc1e5a6 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0ba34c795834d5bbabaf6c66cc1e5a6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:17.846 [ 0]:0x2 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:17.846 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0ba34c795834d5bbabaf6c66cc1e5a6 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0ba34c795834d5bbabaf6c66cc1e5a6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:17.847 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:17.847 [2024-07-15 19:20:28.690041] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:17.847 request: 00:14:17.847 { 00:14:17.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.847 "nsid": 2, 00:14:17.847 "host": "nqn.2016-06.io.spdk:host1", 00:14:17.847 "method": "nvmf_ns_remove_host", 00:14:17.847 "req_id": 1 00:14:17.847 } 00:14:17.847 Got JSON-RPC error response 00:14:17.847 response: 00:14:17.847 { 00:14:17.847 "code": -32602, 00:14:17.847 "message": "Invalid parameters" 00:14:17.847 } 00:14:18.105 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:18.105 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # 
ns_is_visible 0x1 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.106 [ 0]:0x2 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c0ba34c795834d5bbabaf6c66cc1e5a6 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c0ba34c795834d5bbabaf6c66cc1e5a6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:18.106 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.364 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1557912 00:14:18.364 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.364 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1557912 /var/tmp/host.sock 00:14:18.364 19:20:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:18.364 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1557912 ']' 00:14:18.364 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:18.364 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.364 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:18.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
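The second SPDK application started at this point plays the host role: it exposes its own RPC socket (/var/tmp/host.sock), and the remaining masking checks attach NVMe-oF controllers through it rather than through the kernel initiator. Roughly, with binary paths shortened and the same sockets and NQNs as in this run:
  # Host-side SPDK instance on its own RPC socket and core mask
  spdk_tgt -r /var/tmp/host.sock -m 2 &
  # All host-side RPCs then select that socket with -s
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0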
00:14:18.364 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.364 19:20:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.364 [2024-07-15 19:20:29.041074] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:18.364 [2024-07-15 19:20:29.041122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557912 ] 00:14:18.364 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.364 [2024-07-15 19:20:29.067527] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:18.364 [2024-07-15 19:20:29.095545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.364 [2024-07-15 19:20:29.134776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.624 19:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.624 19:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:18.624 19:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.624 19:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:18.882 19:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7c2a5b4a-b2d1-4898-a958-8a5d5c3b28a7 00:14:18.882 19:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:18.882 19:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7C2A5B4AB2D14898A9588A5D5C3B28A7 -i 00:14:19.141 19:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 93dfb335-436f-42f3-bb57-df88ce746976 00:14:19.141 19:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:19.141 19:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 93DFB335436F42F3BB57DF88CE746976 -i 00:14:19.400 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:19.400 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:19.659 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:19.659 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:19.918 nvme0n1 
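From here the namespaces are re-created with fixed NGUIDs (the test derives them from UUIDs by stripping dashes with tr -d -) and each is granted to one host NQN, which is the masking behaviour exercised earlier: a host only sees namespaces it has been added to. Condensed from the RPCs above, with ancillary flags from the run omitted:
  # Namespace 1 -> host1, namespace 2 -> host2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7C2A5B4AB2D14898A9588A5D5C3B28A7
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 93DFB335436F42F3BB57DF88CE746976
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2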
00:14:19.918 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:19.918 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:20.176 nvme1n2 00:14:20.176 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:20.176 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:20.176 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:20.176 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:20.176 19:20:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:20.176 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:20.176 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:20.176 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:20.176 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:20.435 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7c2a5b4a-b2d1-4898-a958-8a5d5c3b28a7 == \7\c\2\a\5\b\4\a\-\b\2\d\1\-\4\8\9\8\-\a\9\5\8\-\8\a\5\d\5\c\3\b\2\8\a\7 ]] 00:14:20.435 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:20.435 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:20.435 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 93dfb335-436f-42f3-bb57-df88ce746976 == \9\3\d\f\b\3\3\5\-\4\3\6\f\-\4\2\f\3\-\b\b\5\7\-\d\f\8\8\c\e\7\4\6\9\7\6 ]] 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1557912 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1557912 ']' 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1557912 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1557912 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1557912' 00:14:20.693 killing process with pid 1557912 00:14:20.693 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1557912 00:14:20.693 
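The checks traced above confirm the round-trip: the NGUIDs programmed on the target surface as the bdev UUIDs seen by the host-side instance, one bdev per host/namespace pair. In sketch form, with the values observed in this run:
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # -> nvme0n1 nvme1n2
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'       # -> 7c2a5b4a-b2d1-4898-a958-8a5d5c3b28a7
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'       # -> 93dfb335-436f-42f3-bb57-df88ce746976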
19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1557912 00:14:20.952 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.211 rmmod nvme_tcp 00:14:21.211 rmmod nvme_fabrics 00:14:21.211 rmmod nvme_keyring 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1555914 ']' 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1555914 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1555914 ']' 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1555914 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.211 19:20:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1555914 00:14:21.211 19:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:21.211 19:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:21.211 19:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1555914' 00:14:21.211 killing process with pid 1555914 00:14:21.211 19:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1555914 00:14:21.211 19:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1555914 00:14:21.469 19:20:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.469 19:20:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.469 19:20:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.469 19:20:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.469 19:20:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.469 19:20:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.469 19:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.469 19:20:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.002 19:20:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:14:24.002 00:14:24.002 real 0m21.598s 00:14:24.002 user 0m22.434s 00:14:24.002 sys 0m5.995s 00:14:24.002 19:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:24.002 19:20:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:24.002 ************************************ 00:14:24.002 END TEST nvmf_ns_masking 00:14:24.002 ************************************ 00:14:24.002 19:20:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:24.002 19:20:34 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:24.002 19:20:34 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:24.002 19:20:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:24.002 19:20:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.002 19:20:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.002 ************************************ 00:14:24.002 START TEST nvmf_nvme_cli 00:14:24.002 ************************************ 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:24.002 * Looking for test storage... 00:14:24.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.002 19:20:34 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.003 19:20:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:29.278 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:29.278 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.278 19:20:39 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:29.278 Found net devices under 0000:86:00.0: cvl_0_0 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:29.278 Found net devices under 0000:86:00.1: cvl_0_1 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.278 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:14:29.279 00:14:29.279 --- 10.0.0.2 ping statistics --- 00:14:29.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.279 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:14:29.279 00:14:29.279 --- 10.0.0.1 ping statistics --- 00:14:29.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.279 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1561887 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1561887 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1561887 ']' 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
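The network plumbing set up above gives the nvme_cli test a loopback NVMe/TCP path on a single machine: one port of the NIC is moved into a dedicated network namespace for the target, the peer port stays in the default namespace for the initiator, and the target application runs inside the namespace. Condensed from the commands traced above (binary paths shortened):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP traffic reach the initiator side
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF           # nvmf target runs inside the namespace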
00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 [2024-07-15 19:20:39.692164] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:29.279 [2024-07-15 19:20:39.692204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.279 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.279 [2024-07-15 19:20:39.723070] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:29.279 [2024-07-15 19:20:39.751040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.279 [2024-07-15 19:20:39.792846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.279 [2024-07-15 19:20:39.792886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.279 [2024-07-15 19:20:39.792893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.279 [2024-07-15 19:20:39.792899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.279 [2024-07-15 19:20:39.792904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.279 [2024-07-15 19:20:39.792949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.279 [2024-07-15 19:20:39.793047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.279 [2024-07-15 19:20:39.793132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.279 [2024-07-15 19:20:39.793133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 [2024-07-15 19:20:39.940185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 Malloc0 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 Malloc1 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.279 19:20:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 [2024-07-15 19:20:40.017574] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.279 19:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:29.539 00:14:29.539 Discovery Log Number of Records 2, Generation counter 2 00:14:29.539 =====Discovery Log Entry 0====== 00:14:29.539 trtype: tcp 00:14:29.539 adrfam: ipv4 00:14:29.539 subtype: current discovery subsystem 00:14:29.539 treq: not required 00:14:29.539 portid: 0 00:14:29.539 trsvcid: 4420 00:14:29.539 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:29.539 traddr: 10.0.0.2 00:14:29.539 eflags: explicit discovery connections, duplicate discovery information 00:14:29.539 sectype: none 
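Stripped of the xtrace noise, the target-side configuration that produced the discovery log above (and the second entry that follows) is a handful of RPCs against the nvmf_tgt started earlier. rpc.py is shortened from the workspace path; the Malloc sizes, serial, model string and the -i 291 controller-ID base are simply the values this test uses.

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two listeners are why nvme discover reports two records: entry 0 above is the discovery subsystem itself, and entry 1 below is cnode1 carrying the two Malloc namespaces.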
00:14:29.539 =====Discovery Log Entry 1====== 00:14:29.539 trtype: tcp 00:14:29.539 adrfam: ipv4 00:14:29.539 subtype: nvme subsystem 00:14:29.539 treq: not required 00:14:29.539 portid: 0 00:14:29.539 trsvcid: 4420 00:14:29.539 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:29.539 traddr: 10.0.0.2 00:14:29.539 eflags: none 00:14:29.539 sectype: none 00:14:29.539 19:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:29.539 19:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:29.539 19:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:29.539 19:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:29.539 19:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:29.539 19:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:29.540 19:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:29.540 19:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:29.540 19:20:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:29.540 19:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:29.540 19:20:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.919 19:20:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:30.919 19:20:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:30.919 19:20:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.919 19:20:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:30.919 19:20:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:30.919 19:20:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.824 19:20:43 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:32.824 /dev/nvme0n1 ]] 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:32.824 19:20:43 
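On the initiator side the flow traced above is plain nvme-cli against 10.0.0.2 from the root namespace; in outline it looks like the sketch below, where the hostnqn/hostid pair is the one generated for this run and the serial check mirrors what waitforserial does.

  modprobe nvme-tcp
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # poll until both namespaces surface with the subsystem serial (2 expected here)
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
  nvme list          # the test counts the /dev/nvme* entries it finds here
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The /dev/nvme0n1 and /dev/nvme0n2 devices seen above are the two Malloc namespaces of cnode1; the test compares the device count after connect (2 here) with the count taken before connecting (0) before it disconnects again.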
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.824 rmmod nvme_tcp 00:14:32.824 rmmod nvme_fabrics 00:14:32.824 rmmod nvme_keyring 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.824 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1561887 ']' 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1561887 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1561887 ']' 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1561887 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1561887 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1561887' 00:14:32.825 killing process with pid 1561887 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1561887 00:14:32.825 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1561887 00:14:33.083 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.083 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.083 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.083 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.083 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.083 19:20:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.083 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.083 19:20:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.616 19:20:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.616 00:14:35.616 real 0m11.558s 00:14:35.616 user 0m17.379s 00:14:35.616 sys 0m4.429s 00:14:35.617 19:20:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.617 19:20:45 nvmf_tcp.nvmf_nvme_cli -- 
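Teardown (nvmftestfini) then reverses the setup. The trace above only shows its effects, so the sketch below is an approximation; in particular the namespace removal line stands in for _remove_spdk_ns, whose internals are not visible in this log.

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill $nvmfpid                       # the nvmf_tgt started for this test (pid 1561887 here)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk     # assumption: roughly what remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1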
common/autotest_common.sh@10 -- # set +x 00:14:35.617 ************************************ 00:14:35.617 END TEST nvmf_nvme_cli 00:14:35.617 ************************************ 00:14:35.617 19:20:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:35.617 19:20:45 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:35.617 19:20:45 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:35.617 19:20:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.617 19:20:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.617 19:20:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.617 ************************************ 00:14:35.617 START TEST nvmf_vfio_user 00:14:35.617 ************************************ 00:14:35.617 19:20:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:35.617 * Looking for test storage... 00:14:35.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1562990 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1562990' 00:14:35.617 Process pid: 1562990 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1562990 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1562990 ']' 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:35.617 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:35.617 [2024-07-15 19:20:46.150894] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:35.617 [2024-07-15 19:20:46.150940] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.617 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.617 [2024-07-15 19:20:46.178102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:35.617 [2024-07-15 19:20:46.206519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.617 [2024-07-15 19:20:46.248331] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.617 [2024-07-15 19:20:46.248368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
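For the vfio-user variant no network plumbing is needed: the target exposes each controller as a socket/file path under /var/run/vfio-user and the initiator attaches to that path directly instead of going over an IP network. Condensed from the trace here and from the per-controller steps that follow (rpc.py and nvmf_tgt paths shortened, $i standing for the controller index, 1 and 2 in this test):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  # then, for each controller $i:
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0

The initiator side then attaches with spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1', which is what drives the BAR-mapping and controller-enable trace further down.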
00:14:35.617 [2024-07-15 19:20:46.248375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.617 [2024-07-15 19:20:46.248381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.617 [2024-07-15 19:20:46.248386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.617 [2024-07-15 19:20:46.248434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.617 [2024-07-15 19:20:46.248530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.617 [2024-07-15 19:20:46.248618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.617 [2024-07-15 19:20:46.248619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.618 19:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.618 19:20:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:35.618 19:20:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:36.552 19:20:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:36.811 19:20:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:36.811 19:20:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:36.811 19:20:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:36.811 19:20:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:36.811 19:20:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:37.070 Malloc1 00:14:37.070 19:20:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:37.329 19:20:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:37.329 19:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:37.630 19:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:37.630 19:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:37.630 19:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:37.630 Malloc2 00:14:37.889 19:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:37.889 19:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:38.149 19:20:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:38.409 19:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:38.409 19:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:38.409 19:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:38.409 19:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:38.409 19:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:38.409 19:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:38.409 [2024-07-15 19:20:49.084210] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:14:38.409 [2024-07-15 19:20:49.084248] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563477 ] 00:14:38.409 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.409 [2024-07-15 19:20:49.096752] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:38.409 [2024-07-15 19:20:49.112778] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:38.409 [2024-07-15 19:20:49.120516] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:38.409 [2024-07-15 19:20:49.120536] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa63b234000 00:14:38.409 [2024-07-15 19:20:49.121514] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:38.409 [2024-07-15 19:20:49.122517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:38.409 [2024-07-15 19:20:49.123525] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:38.409 [2024-07-15 19:20:49.124530] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:38.409 [2024-07-15 19:20:49.125537] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:38.409 [2024-07-15 19:20:49.126543] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:38.409 [2024-07-15 19:20:49.127548] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:38.409 [2024-07-15 19:20:49.128558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap 
offset 0 00:14:38.409 [2024-07-15 19:20:49.129566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:38.409 [2024-07-15 19:20:49.129577] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa6397f5000 00:14:38.409 [2024-07-15 19:20:49.130521] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:38.409 [2024-07-15 19:20:49.141679] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:38.410 [2024-07-15 19:20:49.141703] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:38.410 [2024-07-15 19:20:49.146677] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:38.410 [2024-07-15 19:20:49.146719] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:38.410 [2024-07-15 19:20:49.146787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:38.410 [2024-07-15 19:20:49.146803] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:38.410 [2024-07-15 19:20:49.146809] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:38.410 [2024-07-15 19:20:49.147673] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:38.410 [2024-07-15 19:20:49.147683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:38.410 [2024-07-15 19:20:49.147689] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:38.410 [2024-07-15 19:20:49.148677] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:38.410 [2024-07-15 19:20:49.148687] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:38.410 [2024-07-15 19:20:49.148693] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:38.410 [2024-07-15 19:20:49.149688] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:38.410 [2024-07-15 19:20:49.149696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:38.410 [2024-07-15 19:20:49.150693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:38.410 [2024-07-15 19:20:49.150701] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:38.410 [2024-07-15 
19:20:49.150706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:38.410 [2024-07-15 19:20:49.150711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:38.410 [2024-07-15 19:20:49.150816] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:38.410 [2024-07-15 19:20:49.150820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:38.410 [2024-07-15 19:20:49.150825] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:38.410 [2024-07-15 19:20:49.151699] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:38.410 [2024-07-15 19:20:49.152706] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:38.410 [2024-07-15 19:20:49.153708] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:38.410 [2024-07-15 19:20:49.154714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:38.410 [2024-07-15 19:20:49.156235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:38.410 [2024-07-15 19:20:49.156731] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:38.410 [2024-07-15 19:20:49.156738] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:38.410 [2024-07-15 19:20:49.156743] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.156759] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:38.410 [2024-07-15 19:20:49.156770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.156783] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:38.410 [2024-07-15 19:20:49.156787] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:38.410 [2024-07-15 19:20:49.156799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:38.410 [2024-07-15 19:20:49.156841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:38.410 [2024-07-15 19:20:49.156850] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:38.410 [2024-07-15 
19:20:49.156856] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:38.410 [2024-07-15 19:20:49.156860] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:38.410 [2024-07-15 19:20:49.156864] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:38.410 [2024-07-15 19:20:49.156868] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:38.410 [2024-07-15 19:20:49.156872] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:38.410 [2024-07-15 19:20:49.156876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.156882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.156891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:38.410 [2024-07-15 19:20:49.156906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:38.410 [2024-07-15 19:20:49.156917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.410 [2024-07-15 19:20:49.156925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.410 [2024-07-15 19:20:49.156933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.410 [2024-07-15 19:20:49.156940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.410 [2024-07-15 19:20:49.156944] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.156951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.156959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:38.410 [2024-07-15 19:20:49.156967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:38.410 [2024-07-15 19:20:49.156972] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:38.410 [2024-07-15 19:20:49.156976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.156982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:38.410 
[2024-07-15 19:20:49.156987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.156995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:38.410 [2024-07-15 19:20:49.157006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:38.410 [2024-07-15 19:20:49.157057] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157065] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157072] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:38.410 [2024-07-15 19:20:49.157076] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:38.410 [2024-07-15 19:20:49.157081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:38.410 [2024-07-15 19:20:49.157093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:38.410 [2024-07-15 19:20:49.157100] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:38.410 [2024-07-15 19:20:49.157108] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157120] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:38.410 [2024-07-15 19:20:49.157124] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:38.410 [2024-07-15 19:20:49.157129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:38.410 [2024-07-15 19:20:49.157150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:38.410 [2024-07-15 19:20:49.157161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157173] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:38.410 [2024-07-15 19:20:49.157177] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:38.410 [2024-07-15 19:20:49.157183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:14:38.410 [2024-07-15 19:20:49.157192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:38.410 [2024-07-15 19:20:49.157200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157205] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157212] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157217] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:38.410 [2024-07-15 19:20:49.157232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:38.411 [2024-07-15 19:20:49.157238] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:38.411 [2024-07-15 19:20:49.157242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:38.411 [2024-07-15 19:20:49.157247] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:38.411 [2024-07-15 19:20:49.157263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:38.411 [2024-07-15 19:20:49.157276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:38.411 [2024-07-15 19:20:49.157286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:38.411 [2024-07-15 19:20:49.157294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:38.411 [2024-07-15 19:20:49.157304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:38.411 [2024-07-15 19:20:49.157315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:38.411 [2024-07-15 19:20:49.157325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:38.411 [2024-07-15 19:20:49.157336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:38.411 [2024-07-15 19:20:49.157348] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:38.411 [2024-07-15 19:20:49.157352] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:38.411 [2024-07-15 19:20:49.157355] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:38.411 [2024-07-15 19:20:49.157359] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:38.411 [2024-07-15 19:20:49.157365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:38.411 [2024-07-15 19:20:49.157371] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:38.411 [2024-07-15 19:20:49.157375] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:38.411 [2024-07-15 19:20:49.157380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:38.411 [2024-07-15 19:20:49.157386] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:38.411 [2024-07-15 19:20:49.157390] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:38.411 [2024-07-15 19:20:49.157395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:38.411 [2024-07-15 19:20:49.157402] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:38.411 [2024-07-15 19:20:49.157406] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:38.411 [2024-07-15 19:20:49.157411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:38.411 [2024-07-15 19:20:49.157420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:38.411 [2024-07-15 19:20:49.157433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:38.411 [2024-07-15 19:20:49.157443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:38.411 [2024-07-15 19:20:49.157449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:38.411 ===================================================== 00:14:38.411 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:38.411 ===================================================== 00:14:38.411 Controller Capabilities/Features 00:14:38.411 ================================ 00:14:38.411 Vendor ID: 4e58 00:14:38.411 Subsystem Vendor ID: 4e58 00:14:38.411 Serial Number: SPDK1 00:14:38.411 Model Number: SPDK bdev Controller 00:14:38.411 Firmware Version: 24.09 00:14:38.411 Recommended Arb Burst: 6 00:14:38.411 IEEE OUI Identifier: 8d 6b 50 00:14:38.411 Multi-path I/O 00:14:38.411 May have multiple subsystem ports: Yes 00:14:38.411 May have multiple controllers: Yes 00:14:38.411 Associated with SR-IOV VF: No 00:14:38.411 Max Data Transfer Size: 131072 00:14:38.411 Max Number of Namespaces: 32 00:14:38.411 Max Number of I/O Queues: 127 00:14:38.411 NVMe Specification Version (VS): 1.3 00:14:38.411 NVMe Specification Version (Identify): 1.3 00:14:38.411 Maximum Queue Entries: 256 
00:14:38.411 Contiguous Queues Required: Yes 00:14:38.411 Arbitration Mechanisms Supported 00:14:38.411 Weighted Round Robin: Not Supported 00:14:38.411 Vendor Specific: Not Supported 00:14:38.411 Reset Timeout: 15000 ms 00:14:38.411 Doorbell Stride: 4 bytes 00:14:38.411 NVM Subsystem Reset: Not Supported 00:14:38.411 Command Sets Supported 00:14:38.411 NVM Command Set: Supported 00:14:38.411 Boot Partition: Not Supported 00:14:38.411 Memory Page Size Minimum: 4096 bytes 00:14:38.411 Memory Page Size Maximum: 4096 bytes 00:14:38.411 Persistent Memory Region: Not Supported 00:14:38.411 Optional Asynchronous Events Supported 00:14:38.411 Namespace Attribute Notices: Supported 00:14:38.411 Firmware Activation Notices: Not Supported 00:14:38.411 ANA Change Notices: Not Supported 00:14:38.411 PLE Aggregate Log Change Notices: Not Supported 00:14:38.411 LBA Status Info Alert Notices: Not Supported 00:14:38.411 EGE Aggregate Log Change Notices: Not Supported 00:14:38.411 Normal NVM Subsystem Shutdown event: Not Supported 00:14:38.411 Zone Descriptor Change Notices: Not Supported 00:14:38.411 Discovery Log Change Notices: Not Supported 00:14:38.411 Controller Attributes 00:14:38.411 128-bit Host Identifier: Supported 00:14:38.411 Non-Operational Permissive Mode: Not Supported 00:14:38.411 NVM Sets: Not Supported 00:14:38.411 Read Recovery Levels: Not Supported 00:14:38.411 Endurance Groups: Not Supported 00:14:38.411 Predictable Latency Mode: Not Supported 00:14:38.411 Traffic Based Keep ALive: Not Supported 00:14:38.411 Namespace Granularity: Not Supported 00:14:38.411 SQ Associations: Not Supported 00:14:38.411 UUID List: Not Supported 00:14:38.411 Multi-Domain Subsystem: Not Supported 00:14:38.411 Fixed Capacity Management: Not Supported 00:14:38.411 Variable Capacity Management: Not Supported 00:14:38.411 Delete Endurance Group: Not Supported 00:14:38.411 Delete NVM Set: Not Supported 00:14:38.411 Extended LBA Formats Supported: Not Supported 00:14:38.411 Flexible Data Placement Supported: Not Supported 00:14:38.411 00:14:38.411 Controller Memory Buffer Support 00:14:38.411 ================================ 00:14:38.411 Supported: No 00:14:38.411 00:14:38.411 Persistent Memory Region Support 00:14:38.411 ================================ 00:14:38.411 Supported: No 00:14:38.411 00:14:38.411 Admin Command Set Attributes 00:14:38.411 ============================ 00:14:38.411 Security Send/Receive: Not Supported 00:14:38.411 Format NVM: Not Supported 00:14:38.411 Firmware Activate/Download: Not Supported 00:14:38.411 Namespace Management: Not Supported 00:14:38.411 Device Self-Test: Not Supported 00:14:38.411 Directives: Not Supported 00:14:38.411 NVMe-MI: Not Supported 00:14:38.411 Virtualization Management: Not Supported 00:14:38.411 Doorbell Buffer Config: Not Supported 00:14:38.411 Get LBA Status Capability: Not Supported 00:14:38.411 Command & Feature Lockdown Capability: Not Supported 00:14:38.411 Abort Command Limit: 4 00:14:38.411 Async Event Request Limit: 4 00:14:38.411 Number of Firmware Slots: N/A 00:14:38.411 Firmware Slot 1 Read-Only: N/A 00:14:38.411 Firmware Activation Without Reset: N/A 00:14:38.411 Multiple Update Detection Support: N/A 00:14:38.411 Firmware Update Granularity: No Information Provided 00:14:38.411 Per-Namespace SMART Log: No 00:14:38.411 Asymmetric Namespace Access Log Page: Not Supported 00:14:38.411 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:38.411 Command Effects Log Page: Supported 00:14:38.411 Get Log Page Extended Data: Supported 00:14:38.411 Telemetry 
Log Pages: Not Supported 00:14:38.411 Persistent Event Log Pages: Not Supported 00:14:38.411 Supported Log Pages Log Page: May Support 00:14:38.411 Commands Supported & Effects Log Page: Not Supported 00:14:38.411 Feature Identifiers & Effects Log Page:May Support 00:14:38.411 NVMe-MI Commands & Effects Log Page: May Support 00:14:38.411 Data Area 4 for Telemetry Log: Not Supported 00:14:38.411 Error Log Page Entries Supported: 128 00:14:38.411 Keep Alive: Supported 00:14:38.411 Keep Alive Granularity: 10000 ms 00:14:38.411 00:14:38.411 NVM Command Set Attributes 00:14:38.411 ========================== 00:14:38.411 Submission Queue Entry Size 00:14:38.411 Max: 64 00:14:38.411 Min: 64 00:14:38.411 Completion Queue Entry Size 00:14:38.411 Max: 16 00:14:38.411 Min: 16 00:14:38.411 Number of Namespaces: 32 00:14:38.411 Compare Command: Supported 00:14:38.411 Write Uncorrectable Command: Not Supported 00:14:38.411 Dataset Management Command: Supported 00:14:38.411 Write Zeroes Command: Supported 00:14:38.411 Set Features Save Field: Not Supported 00:14:38.411 Reservations: Not Supported 00:14:38.411 Timestamp: Not Supported 00:14:38.411 Copy: Supported 00:14:38.411 Volatile Write Cache: Present 00:14:38.411 Atomic Write Unit (Normal): 1 00:14:38.411 Atomic Write Unit (PFail): 1 00:14:38.411 Atomic Compare & Write Unit: 1 00:14:38.411 Fused Compare & Write: Supported 00:14:38.411 Scatter-Gather List 00:14:38.411 SGL Command Set: Supported (Dword aligned) 00:14:38.411 SGL Keyed: Not Supported 00:14:38.411 SGL Bit Bucket Descriptor: Not Supported 00:14:38.411 SGL Metadata Pointer: Not Supported 00:14:38.411 Oversized SGL: Not Supported 00:14:38.411 SGL Metadata Address: Not Supported 00:14:38.411 SGL Offset: Not Supported 00:14:38.411 Transport SGL Data Block: Not Supported 00:14:38.411 Replay Protected Memory Block: Not Supported 00:14:38.412 00:14:38.412 Firmware Slot Information 00:14:38.412 ========================= 00:14:38.412 Active slot: 1 00:14:38.412 Slot 1 Firmware Revision: 24.09 00:14:38.412 00:14:38.412 00:14:38.412 Commands Supported and Effects 00:14:38.412 ============================== 00:14:38.412 Admin Commands 00:14:38.412 -------------- 00:14:38.412 Get Log Page (02h): Supported 00:14:38.412 Identify (06h): Supported 00:14:38.412 Abort (08h): Supported 00:14:38.412 Set Features (09h): Supported 00:14:38.412 Get Features (0Ah): Supported 00:14:38.412 Asynchronous Event Request (0Ch): Supported 00:14:38.412 Keep Alive (18h): Supported 00:14:38.412 I/O Commands 00:14:38.412 ------------ 00:14:38.412 Flush (00h): Supported LBA-Change 00:14:38.412 Write (01h): Supported LBA-Change 00:14:38.412 Read (02h): Supported 00:14:38.412 Compare (05h): Supported 00:14:38.412 Write Zeroes (08h): Supported LBA-Change 00:14:38.412 Dataset Management (09h): Supported LBA-Change 00:14:38.412 Copy (19h): Supported LBA-Change 00:14:38.412 00:14:38.412 Error Log 00:14:38.412 ========= 00:14:38.412 00:14:38.412 Arbitration 00:14:38.412 =========== 00:14:38.412 Arbitration Burst: 1 00:14:38.412 00:14:38.412 Power Management 00:14:38.412 ================ 00:14:38.412 Number of Power States: 1 00:14:38.412 Current Power State: Power State #0 00:14:38.412 Power State #0: 00:14:38.412 Max Power: 0.00 W 00:14:38.412 Non-Operational State: Operational 00:14:38.412 Entry Latency: Not Reported 00:14:38.412 Exit Latency: Not Reported 00:14:38.412 Relative Read Throughput: 0 00:14:38.412 Relative Read Latency: 0 00:14:38.412 Relative Write Throughput: 0 00:14:38.412 Relative Write Latency: 0 00:14:38.412 Idle 
Power: Not Reported 00:14:38.412 Active Power: Not Reported 00:14:38.412 Non-Operational Permissive Mode: Not Supported 00:14:38.412 00:14:38.412 Health Information 00:14:38.412 ================== 00:14:38.412 Critical Warnings: 00:14:38.412 Available Spare Space: OK 00:14:38.412 Temperature: OK 00:14:38.412 Device Reliability: OK 00:14:38.412 Read Only: No 00:14:38.412 Volatile Memory Backup: OK 00:14:38.412 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:38.412 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:38.412 Available Spare: 0% 00:14:38.412 Available Sp[2024-07-15 19:20:49.157543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:38.412 [2024-07-15 19:20:49.157553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:38.412 [2024-07-15 19:20:49.157580] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:38.412 [2024-07-15 19:20:49.157588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.412 [2024-07-15 19:20:49.157594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.412 [2024-07-15 19:20:49.157600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.412 [2024-07-15 19:20:49.157605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.412 [2024-07-15 19:20:49.157735] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:38.412 [2024-07-15 19:20:49.157745] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:38.412 [2024-07-15 19:20:49.158739] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:38.412 [2024-07-15 19:20:49.158787] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:38.412 [2024-07-15 19:20:49.158793] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:38.412 [2024-07-15 19:20:49.159748] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:38.412 [2024-07-15 19:20:49.159761] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:38.412 [2024-07-15 19:20:49.159810] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:38.412 [2024-07-15 19:20:49.165231] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:38.412 are Threshold: 0% 00:14:38.412 Life Percentage Used: 0% 00:14:38.412 Data Units Read: 0 00:14:38.412 Data Units Written: 0 00:14:38.412 Host Read Commands: 0 00:14:38.412 Host Write Commands: 0 00:14:38.412 Controller Busy Time: 0 minutes 00:14:38.412 Power Cycles: 0 00:14:38.412 Power On Hours: 0 hours 
00:14:38.412 Unsafe Shutdowns: 0 00:14:38.412 Unrecoverable Media Errors: 0 00:14:38.412 Lifetime Error Log Entries: 0 00:14:38.412 Warning Temperature Time: 0 minutes 00:14:38.412 Critical Temperature Time: 0 minutes 00:14:38.412 00:14:38.412 Number of Queues 00:14:38.412 ================ 00:14:38.412 Number of I/O Submission Queues: 127 00:14:38.412 Number of I/O Completion Queues: 127 00:14:38.412 00:14:38.412 Active Namespaces 00:14:38.412 ================= 00:14:38.412 Namespace ID:1 00:14:38.412 Error Recovery Timeout: Unlimited 00:14:38.412 Command Set Identifier: NVM (00h) 00:14:38.412 Deallocate: Supported 00:14:38.412 Deallocated/Unwritten Error: Not Supported 00:14:38.412 Deallocated Read Value: Unknown 00:14:38.412 Deallocate in Write Zeroes: Not Supported 00:14:38.412 Deallocated Guard Field: 0xFFFF 00:14:38.412 Flush: Supported 00:14:38.412 Reservation: Supported 00:14:38.412 Namespace Sharing Capabilities: Multiple Controllers 00:14:38.412 Size (in LBAs): 131072 (0GiB) 00:14:38.412 Capacity (in LBAs): 131072 (0GiB) 00:14:38.412 Utilization (in LBAs): 131072 (0GiB) 00:14:38.412 NGUID: 893E1C1A55404E2BB3C07A35464CE11E 00:14:38.412 UUID: 893e1c1a-5540-4e2b-b3c0-7a35464ce11e 00:14:38.412 Thin Provisioning: Not Supported 00:14:38.412 Per-NS Atomic Units: Yes 00:14:38.412 Atomic Boundary Size (Normal): 0 00:14:38.412 Atomic Boundary Size (PFail): 0 00:14:38.412 Atomic Boundary Offset: 0 00:14:38.412 Maximum Single Source Range Length: 65535 00:14:38.412 Maximum Copy Length: 65535 00:14:38.412 Maximum Source Range Count: 1 00:14:38.412 NGUID/EUI64 Never Reused: No 00:14:38.412 Namespace Write Protected: No 00:14:38.412 Number of LBA Formats: 1 00:14:38.412 Current LBA Format: LBA Format #00 00:14:38.412 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:38.412 00:14:38.412 19:20:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:38.412 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.672 [2024-07-15 19:20:49.379124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:43.944 Initializing NVMe Controllers 00:14:43.944 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:43.944 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:43.944 Initialization complete. Launching workers. 
00:14:43.944 ======================================================== 00:14:43.944 Latency(us) 00:14:43.944 Device Information : IOPS MiB/s Average min max 00:14:43.944 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39926.18 155.96 3205.75 954.37 9297.82 00:14:43.944 ======================================================== 00:14:43.944 Total : 39926.18 155.96 3205.75 954.37 9297.82 00:14:43.944 00:14:43.944 [2024-07-15 19:20:54.398869] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:43.944 19:20:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:43.944 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.944 [2024-07-15 19:20:54.614899] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.217 Initializing NVMe Controllers 00:14:49.217 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.217 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:49.217 Initialization complete. Launching workers. 00:14:49.217 ======================================================== 00:14:49.217 Latency(us) 00:14:49.217 Device Information : IOPS MiB/s Average min max 00:14:49.217 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.09 62.70 7979.91 7776.57 8043.47 00:14:49.217 ======================================================== 00:14:49.217 Total : 16051.09 62.70 7979.91 7776.57 8043.47 00:14:49.217 00:14:49.217 [2024-07-15 19:20:59.656884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.217 19:20:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:49.217 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.217 [2024-07-15 19:20:59.842822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.481 [2024-07-15 19:21:04.924597] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:54.481 Initializing NVMe Controllers 00:14:54.481 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:54.481 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:54.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:54.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:54.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:54.481 Initialization complete. Launching workers. 
00:14:54.481 Starting thread on core 2 00:14:54.481 Starting thread on core 3 00:14:54.481 Starting thread on core 1 00:14:54.481 19:21:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:54.481 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.481 [2024-07-15 19:21:05.206628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.768 [2024-07-15 19:21:08.266871] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.768 Initializing NVMe Controllers 00:14:57.768 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.768 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.768 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:57.768 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:57.768 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:57.768 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:57.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:57.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:57.768 Initialization complete. Launching workers. 00:14:57.768 Starting thread on core 1 with urgent priority queue 00:14:57.768 Starting thread on core 2 with urgent priority queue 00:14:57.768 Starting thread on core 3 with urgent priority queue 00:14:57.768 Starting thread on core 0 with urgent priority queue 00:14:57.768 SPDK bdev Controller (SPDK1 ) core 0: 7715.33 IO/s 12.96 secs/100000 ios 00:14:57.768 SPDK bdev Controller (SPDK1 ) core 1: 9157.33 IO/s 10.92 secs/100000 ios 00:14:57.768 SPDK bdev Controller (SPDK1 ) core 2: 8534.33 IO/s 11.72 secs/100000 ios 00:14:57.768 SPDK bdev Controller (SPDK1 ) core 3: 7997.33 IO/s 12.50 secs/100000 ios 00:14:57.768 ======================================================== 00:14:57.768 00:14:57.768 19:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:57.768 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.768 [2024-07-15 19:21:08.544447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.768 Initializing NVMe Controllers 00:14:57.768 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.768 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:57.768 Namespace ID: 1 size: 0GB 00:14:57.768 Initialization complete. 00:14:57.768 INFO: using host memory buffer for IO 00:14:57.768 Hello world! 
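For reference, the reconnect, arbitration and hello_world runs above all reach the target through the same SPDK transport ID string. A minimal sketch of re-running them by hand, assuming the vfio-user target is still listening at /var/run/vfio-user/domain/vfio-user1/1 and the binaries sit under the build tree used by this job (SPDK_DIR and TRID below are convenience variables, not part of the test script):

# Sketch only: arguments copied from the invocations logged above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
$SPDK_DIR/build/examples/reconnect   -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
$SPDK_DIR/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
$SPDK_DIR/build/examples/hello_world -d 256 -g -r "$TRID"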
00:14:57.768 [2024-07-15 19:21:08.579673] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.768 19:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:58.025 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.025 [2024-07-15 19:21:08.851153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.419 Initializing NVMe Controllers 00:14:59.419 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.419 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.419 Initialization complete. Launching workers. 00:14:59.419 submit (in ns) avg, min, max = 7148.8, 3224.3, 3999618.3 00:14:59.419 complete (in ns) avg, min, max = 20615.7, 1769.6, 7985979.1 00:14:59.419 00:14:59.419 Submit histogram 00:14:59.419 ================ 00:14:59.419 Range in us Cumulative Count 00:14:59.419 3.214 - 3.228: 0.0062% ( 1) 00:14:59.419 3.228 - 3.242: 0.0246% ( 3) 00:14:59.419 3.242 - 3.256: 0.0308% ( 1) 00:14:59.419 3.256 - 3.270: 0.0431% ( 2) 00:14:59.419 3.270 - 3.283: 0.0739% ( 5) 00:14:59.419 3.283 - 3.297: 0.7572% ( 111) 00:14:59.419 3.297 - 3.311: 3.4780% ( 442) 00:14:59.419 3.311 - 3.325: 8.1564% ( 760) 00:14:59.419 3.325 - 3.339: 13.5118% ( 870) 00:14:59.419 3.339 - 3.353: 19.2859% ( 938) 00:14:59.419 3.353 - 3.367: 25.7433% ( 1049) 00:14:59.419 3.367 - 3.381: 31.1357% ( 876) 00:14:59.419 3.381 - 3.395: 36.7005% ( 904) 00:14:59.419 3.395 - 3.409: 41.7175% ( 815) 00:14:59.419 3.409 - 3.423: 46.3527% ( 753) 00:14:59.419 3.423 - 3.437: 50.6433% ( 697) 00:14:59.419 3.437 - 3.450: 55.7525% ( 830) 00:14:59.419 3.450 - 3.464: 62.5300% ( 1101) 00:14:59.419 3.464 - 3.478: 67.6701% ( 835) 00:14:59.419 3.478 - 3.492: 72.5331% ( 790) 00:14:59.419 3.492 - 3.506: 78.0671% ( 899) 00:14:59.419 3.506 - 3.520: 81.8159% ( 609) 00:14:59.419 3.520 - 3.534: 84.4321% ( 425) 00:14:59.419 3.534 - 3.548: 85.9095% ( 240) 00:14:59.419 3.548 - 3.562: 86.8821% ( 158) 00:14:59.419 3.562 - 3.590: 87.9286% ( 170) 00:14:59.419 3.590 - 3.617: 89.1351% ( 196) 00:14:59.419 3.617 - 3.645: 90.7910% ( 269) 00:14:59.419 3.645 - 3.673: 92.5454% ( 285) 00:14:59.419 3.673 - 3.701: 94.1459% ( 260) 00:14:59.419 3.701 - 3.729: 95.6664% ( 247) 00:14:59.419 3.729 - 3.757: 97.1930% ( 248) 00:14:59.419 3.757 - 3.784: 98.2395% ( 170) 00:14:59.419 3.784 - 3.812: 98.8920% ( 106) 00:14:59.419 3.812 - 3.840: 99.2798% ( 63) 00:14:59.419 3.840 - 3.868: 99.4952% ( 35) 00:14:59.419 3.868 - 3.896: 99.5876% ( 15) 00:14:59.419 3.896 - 3.923: 99.6307% ( 7) 00:14:59.419 3.923 - 3.951: 99.6368% ( 1) 00:14:59.419 3.951 - 3.979: 99.6430% ( 1) 00:14:59.419 3.979 - 4.007: 99.6491% ( 1) 00:14:59.419 4.230 - 4.257: 99.6553% ( 1) 00:14:59.419 5.537 - 5.565: 99.6614% ( 1) 00:14:59.419 5.649 - 5.677: 99.6676% ( 1) 00:14:59.419 5.677 - 5.704: 99.6737% ( 1) 00:14:59.419 5.732 - 5.760: 99.6799% ( 1) 00:14:59.419 5.788 - 5.816: 99.6861% ( 1) 00:14:59.419 5.871 - 5.899: 99.6922% ( 1) 00:14:59.419 5.899 - 5.927: 99.6984% ( 1) 00:14:59.419 5.927 - 5.955: 99.7045% ( 1) 00:14:59.419 5.955 - 5.983: 99.7107% ( 1) 00:14:59.419 6.010 - 6.038: 99.7168% ( 1) 00:14:59.419 6.038 - 6.066: 99.7230% ( 1) 00:14:59.419 6.066 - 6.094: 99.7291% ( 1) 00:14:59.419 6.150 - 6.177: 99.7353% ( 1) 00:14:59.419 6.205 - 6.233: 99.7476% ( 2) 00:14:59.419 6.233 - 
6.261: 99.7538% ( 1) 00:14:59.419 6.344 - 6.372: 99.7599% ( 1) 00:14:59.419 6.400 - 6.428: 99.7661% ( 1) 00:14:59.419 6.595 - 6.623: 99.7784% ( 2) 00:14:59.419 6.650 - 6.678: 99.7845% ( 1) 00:14:59.419 6.678 - 6.706: 99.7907% ( 1) 00:14:59.419 6.734 - 6.762: 99.7969% ( 1) 00:14:59.419 6.790 - 6.817: 99.8030% ( 1) 00:14:59.419 6.873 - 6.901: 99.8092% ( 1) 00:14:59.419 6.901 - 6.929: 99.8153% ( 1) 00:14:59.420 7.012 - 7.040: 99.8215% ( 1) 00:14:59.420 7.040 - 7.068: 99.8276% ( 1) 00:14:59.420 7.123 - 7.179: 99.8461% ( 3) 00:14:59.420 7.290 - 7.346: 99.8646% ( 3) 00:14:59.420 7.569 - 7.624: 99.8769% ( 2) 00:14:59.420 8.459 - 8.515: 99.8830% ( 1) 00:14:59.420 8.904 - 8.960: 99.8892% ( 1) 00:14:59.420 9.572 - 9.628: 99.8954% ( 1) 00:14:59.420 13.523 - 13.579: 99.9015% ( 1) 00:14:59.420 14.692 - 14.803: 99.9077% ( 1) 00:14:59.420 3989.148 - 4017.642: 100.0000% ( 15) 00:14:59.420 00:14:59.420 Complete histogram 00:14:59.420 ================== 00:14:59.420 Range in us Cumulative Count 00:14:59.420 1.767 - [2024-07-15 19:21:09.870125] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.420 1.774: 0.0246% ( 4) 00:14:59.420 1.774 - 1.781: 0.0431% ( 3) 00:14:59.420 1.781 - 1.795: 0.0800% ( 6) 00:14:59.420 1.795 - 1.809: 0.0985% ( 3) 00:14:59.420 1.809 - 1.823: 0.4247% ( 53) 00:14:59.420 1.823 - 1.837: 5.5894% ( 839) 00:14:59.420 1.837 - 1.850: 10.2493% ( 757) 00:14:59.420 1.850 - 1.864: 12.2438% ( 324) 00:14:59.420 1.864 - 1.878: 35.9680% ( 3854) 00:14:59.420 1.878 - 1.892: 80.7018% ( 7267) 00:14:59.420 1.892 - 1.906: 92.2253% ( 1872) 00:14:59.420 1.906 - 1.920: 95.1616% ( 477) 00:14:59.420 1.920 - 1.934: 96.2450% ( 176) 00:14:59.420 1.934 - 1.948: 97.1376% ( 145) 00:14:59.420 1.948 - 1.962: 98.2395% ( 179) 00:14:59.420 1.962 - 1.976: 98.9474% ( 115) 00:14:59.420 1.976 - 1.990: 99.2428% ( 48) 00:14:59.420 1.990 - 2.003: 99.2921% ( 8) 00:14:59.420 2.003 - 2.017: 99.3290% ( 6) 00:14:59.420 2.017 - 2.031: 99.3352% ( 1) 00:14:59.420 2.045 - 2.059: 99.3475% ( 2) 00:14:59.420 2.101 - 2.115: 99.3536% ( 1) 00:14:59.420 2.212 - 2.226: 99.3598% ( 1) 00:14:59.420 2.240 - 2.254: 99.3660% ( 1) 00:14:59.420 2.254 - 2.268: 99.3721% ( 1) 00:14:59.420 2.518 - 2.532: 99.3783% ( 1) 00:14:59.420 4.146 - 4.174: 99.3844% ( 1) 00:14:59.420 4.174 - 4.202: 99.3906% ( 1) 00:14:59.420 4.341 - 4.369: 99.3967% ( 1) 00:14:59.420 4.369 - 4.397: 99.4029% ( 1) 00:14:59.420 4.452 - 4.480: 99.4090% ( 1) 00:14:59.420 4.480 - 4.508: 99.4152% ( 1) 00:14:59.420 4.591 - 4.619: 99.4214% ( 1) 00:14:59.420 4.619 - 4.647: 99.4275% ( 1) 00:14:59.420 4.703 - 4.730: 99.4337% ( 1) 00:14:59.420 4.758 - 4.786: 99.4398% ( 1) 00:14:59.420 4.786 - 4.814: 99.4460% ( 1) 00:14:59.420 4.925 - 4.953: 99.4521% ( 1) 00:14:59.420 4.953 - 4.981: 99.4583% ( 1) 00:14:59.420 4.981 - 5.009: 99.4645% ( 1) 00:14:59.420 5.009 - 5.037: 99.4706% ( 1) 00:14:59.420 5.037 - 5.064: 99.4768% ( 1) 00:14:59.420 5.176 - 5.203: 99.4829% ( 1) 00:14:59.420 5.426 - 5.454: 99.4891% ( 1) 00:14:59.420 5.565 - 5.593: 99.4952% ( 1) 00:14:59.420 5.732 - 5.760: 99.5014% ( 1) 00:14:59.420 5.760 - 5.788: 99.5075% ( 1) 00:14:59.420 6.038 - 6.066: 99.5137% ( 1) 00:14:59.420 6.539 - 6.567: 99.5199% ( 1) 00:14:59.420 11.242 - 11.297: 99.5260% ( 1) 00:14:59.420 12.243 - 12.299: 99.5322% ( 1) 00:14:59.420 2023.068 - 2037.315: 99.5383% ( 1) 00:14:59.420 2991.861 - 3006.108: 99.5445% ( 1) 00:14:59.420 3618.727 - 3632.974: 99.5506% ( 1) 00:14:59.420 3989.148 - 4017.642: 99.9938% ( 72) 00:14:59.420 7978.296 - 8035.283: 100.0000% ( 1) 
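Similarly, the spdk_nvme_perf read and write passes and the overhead run whose submit/complete histograms appear above target the same endpoint; a sketch of equivalent manual invocations, with the arguments taken verbatim from the log (SPDK_DIR and TRID are again just convenience variables):

# Sketch only: same queue depth, I/O size and core mask as the logged runs.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
$SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
$SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
$SPDK_DIR/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$TRID"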
00:14:59.420 00:14:59.420 19:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:59.420 19:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:59.420 19:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:59.420 19:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:59.420 19:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:59.420 [ 00:14:59.420 { 00:14:59.420 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:59.420 "subtype": "Discovery", 00:14:59.420 "listen_addresses": [], 00:14:59.420 "allow_any_host": true, 00:14:59.420 "hosts": [] 00:14:59.420 }, 00:14:59.420 { 00:14:59.420 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:59.420 "subtype": "NVMe", 00:14:59.420 "listen_addresses": [ 00:14:59.420 { 00:14:59.420 "trtype": "VFIOUSER", 00:14:59.420 "adrfam": "IPv4", 00:14:59.420 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:59.420 "trsvcid": "0" 00:14:59.420 } 00:14:59.420 ], 00:14:59.420 "allow_any_host": true, 00:14:59.420 "hosts": [], 00:14:59.420 "serial_number": "SPDK1", 00:14:59.420 "model_number": "SPDK bdev Controller", 00:14:59.420 "max_namespaces": 32, 00:14:59.420 "min_cntlid": 1, 00:14:59.420 "max_cntlid": 65519, 00:14:59.420 "namespaces": [ 00:14:59.420 { 00:14:59.420 "nsid": 1, 00:14:59.420 "bdev_name": "Malloc1", 00:14:59.420 "name": "Malloc1", 00:14:59.420 "nguid": "893E1C1A55404E2BB3C07A35464CE11E", 00:14:59.420 "uuid": "893e1c1a-5540-4e2b-b3c0-7a35464ce11e" 00:14:59.420 } 00:14:59.420 ] 00:14:59.420 }, 00:14:59.420 { 00:14:59.420 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:59.420 "subtype": "NVMe", 00:14:59.420 "listen_addresses": [ 00:14:59.420 { 00:14:59.420 "trtype": "VFIOUSER", 00:14:59.420 "adrfam": "IPv4", 00:14:59.420 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:59.420 "trsvcid": "0" 00:14:59.420 } 00:14:59.420 ], 00:14:59.420 "allow_any_host": true, 00:14:59.420 "hosts": [], 00:14:59.420 "serial_number": "SPDK2", 00:14:59.420 "model_number": "SPDK bdev Controller", 00:14:59.420 "max_namespaces": 32, 00:14:59.420 "min_cntlid": 1, 00:14:59.420 "max_cntlid": 65519, 00:14:59.420 "namespaces": [ 00:14:59.420 { 00:14:59.420 "nsid": 1, 00:14:59.420 "bdev_name": "Malloc2", 00:14:59.420 "name": "Malloc2", 00:14:59.420 "nguid": "AD97BE04AC474565841BC20F1E597BF5", 00:14:59.420 "uuid": "ad97be04-ac47-4565-841b-c20f1e597bf5" 00:14:59.420 } 00:14:59.420 ] 00:14:59.420 } 00:14:59.420 ] 00:14:59.420 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:59.420 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1567475 00:14:59.420 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:59.420 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:59.420 19:21:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:59.420 19:21:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:59.420 19:21:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:59.420 19:21:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:59.420 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:59.420 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:59.420 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.420 [2024-07-15 19:21:10.247696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.678 Malloc3 00:14:59.678 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:59.678 [2024-07-15 19:21:10.505658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.678 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:59.938 Asynchronous Event Request test 00:14:59.938 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.938 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.938 Registering asynchronous event callbacks... 00:14:59.938 Starting namespace attribute notice tests for all controllers... 00:14:59.938 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:59.938 aer_cb - Changed Namespace 00:14:59.938 Cleaning up... 
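The aer_vfio_user step above exercises namespace-attribute AEN delivery by hot-adding a namespace while the aer tool is waiting: a new malloc bdev is created over RPC and attached to cnode1 as namespace 2, which produces the "aer_cb - Changed Namespace" notice, and the nvmf_get_subsystems listing that follows shows the subsystem with both namespaces. A condensed sketch of that sequence, with names, sizes and the touch file taken from the log (the real test also waits for and removes /tmp/aer_touch_file before issuing the RPCs):

# Sketch only: RPC-driven namespace hot-add as performed by nvmf_vfio_user.sh above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK_DIR/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3                       # 64 MB bdev, 512-byte blocks
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # hot-add as nsid 2, triggers the AEN
$SPDK_DIR/scripts/rpc.py nvmf_get_subsystems                                             # confirm Malloc1 (nsid 1) and Malloc3 (nsid 2)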
00:14:59.938 [ 00:14:59.938 { 00:14:59.938 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:59.938 "subtype": "Discovery", 00:14:59.938 "listen_addresses": [], 00:14:59.938 "allow_any_host": true, 00:14:59.938 "hosts": [] 00:14:59.938 }, 00:14:59.938 { 00:14:59.938 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:59.938 "subtype": "NVMe", 00:14:59.938 "listen_addresses": [ 00:14:59.938 { 00:14:59.938 "trtype": "VFIOUSER", 00:14:59.938 "adrfam": "IPv4", 00:14:59.938 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:59.938 "trsvcid": "0" 00:14:59.938 } 00:14:59.938 ], 00:14:59.938 "allow_any_host": true, 00:14:59.938 "hosts": [], 00:14:59.938 "serial_number": "SPDK1", 00:14:59.938 "model_number": "SPDK bdev Controller", 00:14:59.938 "max_namespaces": 32, 00:14:59.938 "min_cntlid": 1, 00:14:59.938 "max_cntlid": 65519, 00:14:59.938 "namespaces": [ 00:14:59.938 { 00:14:59.938 "nsid": 1, 00:14:59.938 "bdev_name": "Malloc1", 00:14:59.938 "name": "Malloc1", 00:14:59.938 "nguid": "893E1C1A55404E2BB3C07A35464CE11E", 00:14:59.938 "uuid": "893e1c1a-5540-4e2b-b3c0-7a35464ce11e" 00:14:59.938 }, 00:14:59.938 { 00:14:59.938 "nsid": 2, 00:14:59.938 "bdev_name": "Malloc3", 00:14:59.938 "name": "Malloc3", 00:14:59.938 "nguid": "3C95755FE9C64A9B9D99C0568BC1AFD2", 00:14:59.938 "uuid": "3c95755f-e9c6-4a9b-9d99-c0568bc1afd2" 00:14:59.938 } 00:14:59.938 ] 00:14:59.938 }, 00:14:59.938 { 00:14:59.938 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:59.938 "subtype": "NVMe", 00:14:59.938 "listen_addresses": [ 00:14:59.938 { 00:14:59.938 "trtype": "VFIOUSER", 00:14:59.938 "adrfam": "IPv4", 00:14:59.938 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:59.938 "trsvcid": "0" 00:14:59.938 } 00:14:59.938 ], 00:14:59.938 "allow_any_host": true, 00:14:59.938 "hosts": [], 00:14:59.938 "serial_number": "SPDK2", 00:14:59.938 "model_number": "SPDK bdev Controller", 00:14:59.938 "max_namespaces": 32, 00:14:59.938 "min_cntlid": 1, 00:14:59.938 "max_cntlid": 65519, 00:14:59.938 "namespaces": [ 00:14:59.938 { 00:14:59.938 "nsid": 1, 00:14:59.938 "bdev_name": "Malloc2", 00:14:59.938 "name": "Malloc2", 00:14:59.938 "nguid": "AD97BE04AC474565841BC20F1E597BF5", 00:14:59.938 "uuid": "ad97be04-ac47-4565-841b-c20f1e597bf5" 00:14:59.938 } 00:14:59.938 ] 00:14:59.938 } 00:14:59.938 ] 00:14:59.938 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1567475 00:14:59.938 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.938 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:59.938 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:59.938 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:59.938 [2024-07-15 19:21:10.744923] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:14:59.938 [2024-07-15 19:21:10.744970] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1567664 ] 00:14:59.938 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.938 [2024-07-15 19:21:10.757623] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:59.938 [2024-07-15 19:21:10.773626] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:59.938 [2024-07-15 19:21:10.783473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.938 [2024-07-15 19:21:10.783495] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f399d1a8000 00:14:59.938 [2024-07-15 19:21:10.784478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.938 [2024-07-15 19:21:10.785481] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.938 [2024-07-15 19:21:10.786489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.938 [2024-07-15 19:21:10.787492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.938 [2024-07-15 19:21:10.788507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.938 [2024-07-15 19:21:10.789515] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.938 [2024-07-15 19:21:10.790518] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.938 [2024-07-15 19:21:10.791521] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.199 [2024-07-15 19:21:10.792532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.199 [2024-07-15 19:21:10.792544] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f399bf6d000 00:15:00.200 [2024-07-15 19:21:10.793489] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.200 [2024-07-15 19:21:10.805676] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:00.200 [2024-07-15 19:21:10.805698] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:00.200 [2024-07-15 19:21:10.810802] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:00.200 [2024-07-15 19:21:10.810837] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: 
max_completions_cap = 64 num_trackers = 192 00:15:00.200 [2024-07-15 19:21:10.810902] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:00.200 [2024-07-15 19:21:10.810915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:00.200 [2024-07-15 19:21:10.810920] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:00.200 [2024-07-15 19:21:10.811801] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:00.200 [2024-07-15 19:21:10.811810] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:00.200 [2024-07-15 19:21:10.811817] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:00.200 [2024-07-15 19:21:10.812810] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:00.200 [2024-07-15 19:21:10.812818] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:00.200 [2024-07-15 19:21:10.812824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:00.200 [2024-07-15 19:21:10.813815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:00.200 [2024-07-15 19:21:10.813823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:00.200 [2024-07-15 19:21:10.814820] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:00.200 [2024-07-15 19:21:10.814828] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:00.200 [2024-07-15 19:21:10.814833] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:00.200 [2024-07-15 19:21:10.814838] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:00.200 [2024-07-15 19:21:10.814943] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:00.200 [2024-07-15 19:21:10.814947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:00.200 [2024-07-15 19:21:10.814952] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:00.200 [2024-07-15 19:21:10.815830] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:00.200 [2024-07-15 19:21:10.816837] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:00.200 [2024-07-15 19:21:10.817845] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:00.200 [2024-07-15 19:21:10.818843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:00.200 [2024-07-15 19:21:10.818880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:00.200 [2024-07-15 19:21:10.819854] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:00.200 [2024-07-15 19:21:10.819862] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:00.200 [2024-07-15 19:21:10.819866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.819883] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:00.200 [2024-07-15 19:21:10.819893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.819903] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.200 [2024-07-15 19:21:10.819908] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.200 [2024-07-15 19:21:10.819919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.200 [2024-07-15 19:21:10.827231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:00.200 [2024-07-15 19:21:10.827242] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:00.200 [2024-07-15 19:21:10.827249] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:00.200 [2024-07-15 19:21:10.827253] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:00.200 [2024-07-15 19:21:10.827257] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:00.200 [2024-07-15 19:21:10.827261] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:00.200 [2024-07-15 19:21:10.827266] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:00.200 [2024-07-15 19:21:10.827270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.827277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 
00:15:00.200 [2024-07-15 19:21:10.827286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:00.200 [2024-07-15 19:21:10.835230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:00.200 [2024-07-15 19:21:10.835246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.200 [2024-07-15 19:21:10.835254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.200 [2024-07-15 19:21:10.835262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.200 [2024-07-15 19:21:10.835269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.200 [2024-07-15 19:21:10.835273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.835283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.835292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:00.200 [2024-07-15 19:21:10.843230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:00.200 [2024-07-15 19:21:10.843238] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:00.200 [2024-07-15 19:21:10.843243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.843249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.843254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.843262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.200 [2024-07-15 19:21:10.851229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:00.200 [2024-07-15 19:21:10.851283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.851289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.851296] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:00.200 [2024-07-15 19:21:10.851300] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:15:00.200 [2024-07-15 19:21:10.851306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:00.200 [2024-07-15 19:21:10.859229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:00.200 [2024-07-15 19:21:10.859239] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:00.200 [2024-07-15 19:21:10.859247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.859254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.859260] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.200 [2024-07-15 19:21:10.859264] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.200 [2024-07-15 19:21:10.859270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.200 [2024-07-15 19:21:10.867230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:00.200 [2024-07-15 19:21:10.867243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.867250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.867256] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.200 [2024-07-15 19:21:10.867262] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.200 [2024-07-15 19:21:10.867269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.200 [2024-07-15 19:21:10.875228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:00.200 [2024-07-15 19:21:10.875238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.875243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.875252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:00.200 [2024-07-15 19:21:10.875257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:00.201 [2024-07-15 19:21:10.875262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 
00:15:00.201 [2024-07-15 19:21:10.875266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:00.201 [2024-07-15 19:21:10.875271] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:00.201 [2024-07-15 19:21:10.875275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:00.201 [2024-07-15 19:21:10.875280] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:00.201 [2024-07-15 19:21:10.875295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:00.201 [2024-07-15 19:21:10.883230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:00.201 [2024-07-15 19:21:10.883244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:00.201 [2024-07-15 19:21:10.891229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:00.201 [2024-07-15 19:21:10.891241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:00.201 [2024-07-15 19:21:10.899229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:00.201 [2024-07-15 19:21:10.899241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.201 [2024-07-15 19:21:10.907229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:00.201 [2024-07-15 19:21:10.907245] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:00.201 [2024-07-15 19:21:10.907250] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:00.201 [2024-07-15 19:21:10.907253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:00.201 [2024-07-15 19:21:10.907257] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:00.201 [2024-07-15 19:21:10.907262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:00.201 [2024-07-15 19:21:10.907269] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:00.201 [2024-07-15 19:21:10.907277] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:00.201 [2024-07-15 19:21:10.907282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:00.201 [2024-07-15 19:21:10.907289] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:00.201 [2024-07-15 19:21:10.907293] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 
00:15:00.201 [2024-07-15 19:21:10.907298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.201 [2024-07-15 19:21:10.907304] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:00.201 [2024-07-15 19:21:10.907308] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:00.201 [2024-07-15 19:21:10.907314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:00.201 [2024-07-15 19:21:10.915230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:00.201 [2024-07-15 19:21:10.915244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:00.201 [2024-07-15 19:21:10.915253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:00.201 [2024-07-15 19:21:10.915259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:00.201 ===================================================== 00:15:00.201 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:00.201 ===================================================== 00:15:00.201 Controller Capabilities/Features 00:15:00.201 ================================ 00:15:00.201 Vendor ID: 4e58 00:15:00.201 Subsystem Vendor ID: 4e58 00:15:00.201 Serial Number: SPDK2 00:15:00.201 Model Number: SPDK bdev Controller 00:15:00.201 Firmware Version: 24.09 00:15:00.201 Recommended Arb Burst: 6 00:15:00.201 IEEE OUI Identifier: 8d 6b 50 00:15:00.201 Multi-path I/O 00:15:00.201 May have multiple subsystem ports: Yes 00:15:00.201 May have multiple controllers: Yes 00:15:00.201 Associated with SR-IOV VF: No 00:15:00.201 Max Data Transfer Size: 131072 00:15:00.201 Max Number of Namespaces: 32 00:15:00.201 Max Number of I/O Queues: 127 00:15:00.201 NVMe Specification Version (VS): 1.3 00:15:00.201 NVMe Specification Version (Identify): 1.3 00:15:00.201 Maximum Queue Entries: 256 00:15:00.201 Contiguous Queues Required: Yes 00:15:00.201 Arbitration Mechanisms Supported 00:15:00.201 Weighted Round Robin: Not Supported 00:15:00.201 Vendor Specific: Not Supported 00:15:00.201 Reset Timeout: 15000 ms 00:15:00.201 Doorbell Stride: 4 bytes 00:15:00.201 NVM Subsystem Reset: Not Supported 00:15:00.201 Command Sets Supported 00:15:00.201 NVM Command Set: Supported 00:15:00.201 Boot Partition: Not Supported 00:15:00.201 Memory Page Size Minimum: 4096 bytes 00:15:00.201 Memory Page Size Maximum: 4096 bytes 00:15:00.201 Persistent Memory Region: Not Supported 00:15:00.201 Optional Asynchronous Events Supported 00:15:00.201 Namespace Attribute Notices: Supported 00:15:00.201 Firmware Activation Notices: Not Supported 00:15:00.201 ANA Change Notices: Not Supported 00:15:00.201 PLE Aggregate Log Change Notices: Not Supported 00:15:00.201 LBA Status Info Alert Notices: Not Supported 00:15:00.201 EGE Aggregate Log Change Notices: Not Supported 00:15:00.201 Normal NVM Subsystem Shutdown event: Not Supported 00:15:00.201 Zone Descriptor Change Notices: Not Supported 00:15:00.201 Discovery Log Change Notices: Not Supported 00:15:00.201 Controller 
Attributes 00:15:00.201 128-bit Host Identifier: Supported 00:15:00.201 Non-Operational Permissive Mode: Not Supported 00:15:00.201 NVM Sets: Not Supported 00:15:00.201 Read Recovery Levels: Not Supported 00:15:00.201 Endurance Groups: Not Supported 00:15:00.201 Predictable Latency Mode: Not Supported 00:15:00.201 Traffic Based Keep ALive: Not Supported 00:15:00.201 Namespace Granularity: Not Supported 00:15:00.201 SQ Associations: Not Supported 00:15:00.201 UUID List: Not Supported 00:15:00.201 Multi-Domain Subsystem: Not Supported 00:15:00.201 Fixed Capacity Management: Not Supported 00:15:00.201 Variable Capacity Management: Not Supported 00:15:00.201 Delete Endurance Group: Not Supported 00:15:00.201 Delete NVM Set: Not Supported 00:15:00.201 Extended LBA Formats Supported: Not Supported 00:15:00.201 Flexible Data Placement Supported: Not Supported 00:15:00.201 00:15:00.201 Controller Memory Buffer Support 00:15:00.201 ================================ 00:15:00.201 Supported: No 00:15:00.201 00:15:00.201 Persistent Memory Region Support 00:15:00.201 ================================ 00:15:00.201 Supported: No 00:15:00.201 00:15:00.201 Admin Command Set Attributes 00:15:00.201 ============================ 00:15:00.201 Security Send/Receive: Not Supported 00:15:00.201 Format NVM: Not Supported 00:15:00.201 Firmware Activate/Download: Not Supported 00:15:00.201 Namespace Management: Not Supported 00:15:00.201 Device Self-Test: Not Supported 00:15:00.201 Directives: Not Supported 00:15:00.201 NVMe-MI: Not Supported 00:15:00.201 Virtualization Management: Not Supported 00:15:00.201 Doorbell Buffer Config: Not Supported 00:15:00.201 Get LBA Status Capability: Not Supported 00:15:00.201 Command & Feature Lockdown Capability: Not Supported 00:15:00.201 Abort Command Limit: 4 00:15:00.201 Async Event Request Limit: 4 00:15:00.201 Number of Firmware Slots: N/A 00:15:00.201 Firmware Slot 1 Read-Only: N/A 00:15:00.201 Firmware Activation Without Reset: N/A 00:15:00.201 Multiple Update Detection Support: N/A 00:15:00.201 Firmware Update Granularity: No Information Provided 00:15:00.201 Per-Namespace SMART Log: No 00:15:00.201 Asymmetric Namespace Access Log Page: Not Supported 00:15:00.201 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:00.201 Command Effects Log Page: Supported 00:15:00.201 Get Log Page Extended Data: Supported 00:15:00.201 Telemetry Log Pages: Not Supported 00:15:00.201 Persistent Event Log Pages: Not Supported 00:15:00.201 Supported Log Pages Log Page: May Support 00:15:00.201 Commands Supported & Effects Log Page: Not Supported 00:15:00.201 Feature Identifiers & Effects Log Page:May Support 00:15:00.201 NVMe-MI Commands & Effects Log Page: May Support 00:15:00.201 Data Area 4 for Telemetry Log: Not Supported 00:15:00.201 Error Log Page Entries Supported: 128 00:15:00.201 Keep Alive: Supported 00:15:00.201 Keep Alive Granularity: 10000 ms 00:15:00.201 00:15:00.201 NVM Command Set Attributes 00:15:00.201 ========================== 00:15:00.201 Submission Queue Entry Size 00:15:00.201 Max: 64 00:15:00.201 Min: 64 00:15:00.201 Completion Queue Entry Size 00:15:00.201 Max: 16 00:15:00.201 Min: 16 00:15:00.201 Number of Namespaces: 32 00:15:00.201 Compare Command: Supported 00:15:00.201 Write Uncorrectable Command: Not Supported 00:15:00.201 Dataset Management Command: Supported 00:15:00.201 Write Zeroes Command: Supported 00:15:00.201 Set Features Save Field: Not Supported 00:15:00.201 Reservations: Not Supported 00:15:00.201 Timestamp: Not Supported 00:15:00.201 Copy: Supported 
00:15:00.201 Volatile Write Cache: Present 00:15:00.201 Atomic Write Unit (Normal): 1 00:15:00.201 Atomic Write Unit (PFail): 1 00:15:00.201 Atomic Compare & Write Unit: 1 00:15:00.201 Fused Compare & Write: Supported 00:15:00.201 Scatter-Gather List 00:15:00.201 SGL Command Set: Supported (Dword aligned) 00:15:00.201 SGL Keyed: Not Supported 00:15:00.201 SGL Bit Bucket Descriptor: Not Supported 00:15:00.201 SGL Metadata Pointer: Not Supported 00:15:00.201 Oversized SGL: Not Supported 00:15:00.201 SGL Metadata Address: Not Supported 00:15:00.202 SGL Offset: Not Supported 00:15:00.202 Transport SGL Data Block: Not Supported 00:15:00.202 Replay Protected Memory Block: Not Supported 00:15:00.202 00:15:00.202 Firmware Slot Information 00:15:00.202 ========================= 00:15:00.202 Active slot: 1 00:15:00.202 Slot 1 Firmware Revision: 24.09 00:15:00.202 00:15:00.202 00:15:00.202 Commands Supported and Effects 00:15:00.202 ============================== 00:15:00.202 Admin Commands 00:15:00.202 -------------- 00:15:00.202 Get Log Page (02h): Supported 00:15:00.202 Identify (06h): Supported 00:15:00.202 Abort (08h): Supported 00:15:00.202 Set Features (09h): Supported 00:15:00.202 Get Features (0Ah): Supported 00:15:00.202 Asynchronous Event Request (0Ch): Supported 00:15:00.202 Keep Alive (18h): Supported 00:15:00.202 I/O Commands 00:15:00.202 ------------ 00:15:00.202 Flush (00h): Supported LBA-Change 00:15:00.202 Write (01h): Supported LBA-Change 00:15:00.202 Read (02h): Supported 00:15:00.202 Compare (05h): Supported 00:15:00.202 Write Zeroes (08h): Supported LBA-Change 00:15:00.202 Dataset Management (09h): Supported LBA-Change 00:15:00.202 Copy (19h): Supported LBA-Change 00:15:00.202 00:15:00.202 Error Log 00:15:00.202 ========= 00:15:00.202 00:15:00.202 Arbitration 00:15:00.202 =========== 00:15:00.202 Arbitration Burst: 1 00:15:00.202 00:15:00.202 Power Management 00:15:00.202 ================ 00:15:00.202 Number of Power States: 1 00:15:00.202 Current Power State: Power State #0 00:15:00.202 Power State #0: 00:15:00.202 Max Power: 0.00 W 00:15:00.202 Non-Operational State: Operational 00:15:00.202 Entry Latency: Not Reported 00:15:00.202 Exit Latency: Not Reported 00:15:00.202 Relative Read Throughput: 0 00:15:00.202 Relative Read Latency: 0 00:15:00.202 Relative Write Throughput: 0 00:15:00.202 Relative Write Latency: 0 00:15:00.202 Idle Power: Not Reported 00:15:00.202 Active Power: Not Reported 00:15:00.202 Non-Operational Permissive Mode: Not Supported 00:15:00.202 00:15:00.202 Health Information 00:15:00.202 ================== 00:15:00.202 Critical Warnings: 00:15:00.202 Available Spare Space: OK 00:15:00.202 Temperature: OK 00:15:00.202 Device Reliability: OK 00:15:00.202 Read Only: No 00:15:00.202 Volatile Memory Backup: OK 00:15:00.202 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:00.202 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:00.202 Available Spare: 0% 00:15:00.202 Available Sp[2024-07-15 19:21:10.915344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:00.202 [2024-07-15 19:21:10.923229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:00.202 [2024-07-15 19:21:10.923257] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:00.202 [2024-07-15 19:21:10.923266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.202 [2024-07-15 19:21:10.923271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.202 [2024-07-15 19:21:10.923277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.202 [2024-07-15 19:21:10.923282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.202 [2024-07-15 19:21:10.923337] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:00.202 [2024-07-15 19:21:10.923346] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:00.202 [2024-07-15 19:21:10.924339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:00.202 [2024-07-15 19:21:10.924382] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:00.202 [2024-07-15 19:21:10.924388] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:00.202 [2024-07-15 19:21:10.925346] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:00.202 [2024-07-15 19:21:10.925356] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:00.202 [2024-07-15 19:21:10.925401] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:00.202 [2024-07-15 19:21:10.926382] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.202 are Threshold: 0% 00:15:00.202 Life Percentage Used: 0% 00:15:00.202 Data Units Read: 0 00:15:00.202 Data Units Written: 0 00:15:00.202 Host Read Commands: 0 00:15:00.202 Host Write Commands: 0 00:15:00.202 Controller Busy Time: 0 minutes 00:15:00.202 Power Cycles: 0 00:15:00.202 Power On Hours: 0 hours 00:15:00.202 Unsafe Shutdowns: 0 00:15:00.202 Unrecoverable Media Errors: 0 00:15:00.202 Lifetime Error Log Entries: 0 00:15:00.202 Warning Temperature Time: 0 minutes 00:15:00.202 Critical Temperature Time: 0 minutes 00:15:00.202 00:15:00.202 Number of Queues 00:15:00.202 ================ 00:15:00.202 Number of I/O Submission Queues: 127 00:15:00.202 Number of I/O Completion Queues: 127 00:15:00.202 00:15:00.202 Active Namespaces 00:15:00.202 ================= 00:15:00.202 Namespace ID:1 00:15:00.202 Error Recovery Timeout: Unlimited 00:15:00.202 Command Set Identifier: NVM (00h) 00:15:00.202 Deallocate: Supported 00:15:00.202 Deallocated/Unwritten Error: Not Supported 00:15:00.202 Deallocated Read Value: Unknown 00:15:00.202 Deallocate in Write Zeroes: Not Supported 00:15:00.202 Deallocated Guard Field: 0xFFFF 00:15:00.202 Flush: Supported 00:15:00.202 Reservation: Supported 00:15:00.202 Namespace Sharing Capabilities: Multiple Controllers 00:15:00.202 Size (in LBAs): 131072 (0GiB) 00:15:00.202 Capacity (in LBAs): 131072 (0GiB) 00:15:00.202 Utilization (in LBAs): 131072 (0GiB) 00:15:00.202 NGUID: AD97BE04AC474565841BC20F1E597BF5 00:15:00.202 
UUID: ad97be04-ac47-4565-841b-c20f1e597bf5 00:15:00.202 Thin Provisioning: Not Supported 00:15:00.202 Per-NS Atomic Units: Yes 00:15:00.202 Atomic Boundary Size (Normal): 0 00:15:00.202 Atomic Boundary Size (PFail): 0 00:15:00.202 Atomic Boundary Offset: 0 00:15:00.202 Maximum Single Source Range Length: 65535 00:15:00.202 Maximum Copy Length: 65535 00:15:00.202 Maximum Source Range Count: 1 00:15:00.202 NGUID/EUI64 Never Reused: No 00:15:00.202 Namespace Write Protected: No 00:15:00.202 Number of LBA Formats: 1 00:15:00.202 Current LBA Format: LBA Format #00 00:15:00.202 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:00.202 00:15:00.202 19:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:00.202 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.462 [2024-07-15 19:21:11.139577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.793 Initializing NVMe Controllers 00:15:05.793 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:05.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:05.793 Initialization complete. Launching workers. 00:15:05.793 ======================================================== 00:15:05.793 Latency(us) 00:15:05.793 Device Information : IOPS MiB/s Average min max 00:15:05.793 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39943.26 156.03 3204.37 967.03 6638.43 00:15:05.793 ======================================================== 00:15:05.793 Total : 39943.26 156.03 3204.37 967.03 6638.43 00:15:05.793 00:15:05.793 [2024-07-15 19:21:16.249473] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.793 19:21:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:05.793 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.793 [2024-07-15 19:21:16.465096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.063 Initializing NVMe Controllers 00:15:11.063 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.063 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:11.063 Initialization complete. Launching workers. 
00:15:11.063 ======================================================== 00:15:11.063 Latency(us) 00:15:11.063 Device Information : IOPS MiB/s Average min max 00:15:11.063 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39899.89 155.86 3207.63 979.83 7470.43 00:15:11.063 ======================================================== 00:15:11.063 Total : 39899.89 155.86 3207.63 979.83 7470.43 00:15:11.063 00:15:11.063 [2024-07-15 19:21:21.482454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.063 19:21:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:11.063 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.063 [2024-07-15 19:21:21.678874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:16.356 [2024-07-15 19:21:26.826323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:16.356 Initializing NVMe Controllers 00:15:16.356 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:16.356 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:16.356 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:16.356 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:16.356 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:16.356 Initialization complete. Launching workers. 00:15:16.356 Starting thread on core 2 00:15:16.356 Starting thread on core 3 00:15:16.356 Starting thread on core 1 00:15:16.356 19:21:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:16.356 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.356 [2024-07-15 19:21:27.100695] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.643 [2024-07-15 19:21:30.178708] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.643 Initializing NVMe Controllers 00:15:19.643 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.643 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.643 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:19.643 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:19.643 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:19.643 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:19.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:19.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:19.643 Initialization complete. Launching workers. 
00:15:19.643 Starting thread on core 1 with urgent priority queue 00:15:19.643 Starting thread on core 2 with urgent priority queue 00:15:19.643 Starting thread on core 3 with urgent priority queue 00:15:19.643 Starting thread on core 0 with urgent priority queue 00:15:19.643 SPDK bdev Controller (SPDK2 ) core 0: 9080.00 IO/s 11.01 secs/100000 ios 00:15:19.643 SPDK bdev Controller (SPDK2 ) core 1: 7658.67 IO/s 13.06 secs/100000 ios 00:15:19.643 SPDK bdev Controller (SPDK2 ) core 2: 11541.33 IO/s 8.66 secs/100000 ios 00:15:19.643 SPDK bdev Controller (SPDK2 ) core 3: 7130.00 IO/s 14.03 secs/100000 ios 00:15:19.643 ======================================================== 00:15:19.643 00:15:19.643 19:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:19.643 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.643 [2024-07-15 19:21:30.442848] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.643 Initializing NVMe Controllers 00:15:19.643 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.643 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:19.643 Namespace ID: 1 size: 0GB 00:15:19.643 Initialization complete. 00:15:19.643 INFO: using host memory buffer for IO 00:15:19.643 Hello world! 00:15:19.643 [2024-07-15 19:21:30.454920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.643 19:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:19.901 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.901 [2024-07-15 19:21:30.725267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.277 Initializing NVMe Controllers 00:15:21.277 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.277 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.277 Initialization complete. Launching workers. 
00:15:21.277 submit (in ns) avg, min, max = 7440.5, 3263.5, 4000496.5 00:15:21.277 complete (in ns) avg, min, max = 19961.8, 1765.2, 4000034.8 00:15:21.277 00:15:21.277 Submit histogram 00:15:21.277 ================ 00:15:21.277 Range in us Cumulative Count 00:15:21.277 3.256 - 3.270: 0.0186% ( 3) 00:15:21.277 3.270 - 3.283: 0.0683% ( 8) 00:15:21.277 3.283 - 3.297: 0.2173% ( 24) 00:15:21.277 3.297 - 3.311: 1.0677% ( 137) 00:15:21.277 3.311 - 3.325: 3.5382% ( 398) 00:15:21.277 3.325 - 3.339: 8.3116% ( 769) 00:15:21.277 3.339 - 3.353: 14.0844% ( 930) 00:15:21.277 3.353 - 3.367: 20.4097% ( 1019) 00:15:21.277 3.367 - 3.381: 26.3501% ( 957) 00:15:21.277 3.381 - 3.395: 31.7443% ( 869) 00:15:21.277 3.395 - 3.409: 37.0639% ( 857) 00:15:21.277 3.409 - 3.423: 42.5016% ( 876) 00:15:21.277 3.423 - 3.437: 46.7163% ( 679) 00:15:21.277 3.437 - 3.450: 50.8070% ( 659) 00:15:21.277 3.450 - 3.464: 55.4811% ( 753) 00:15:21.277 3.464 - 3.478: 61.9553% ( 1043) 00:15:21.277 3.478 - 3.492: 67.3495% ( 869) 00:15:21.277 3.492 - 3.506: 71.8125% ( 719) 00:15:21.277 3.506 - 3.520: 76.6418% ( 778) 00:15:21.277 3.520 - 3.534: 80.8318% ( 675) 00:15:21.277 3.534 - 3.548: 83.5940% ( 445) 00:15:21.277 3.548 - 3.562: 85.4687% ( 302) 00:15:21.277 3.562 - 3.590: 87.1943% ( 278) 00:15:21.277 3.590 - 3.617: 88.1006% ( 146) 00:15:21.277 3.617 - 3.645: 89.6896% ( 256) 00:15:21.277 3.645 - 3.673: 91.4463% ( 283) 00:15:21.277 3.673 - 3.701: 93.0540% ( 259) 00:15:21.277 3.701 - 3.729: 94.6307% ( 254) 00:15:21.277 3.729 - 3.757: 96.2880% ( 267) 00:15:21.277 3.757 - 3.784: 97.7157% ( 230) 00:15:21.277 3.784 - 3.812: 98.5164% ( 129) 00:15:21.277 3.812 - 3.840: 99.0503% ( 86) 00:15:21.277 3.840 - 3.868: 99.3669% ( 51) 00:15:21.277 3.868 - 3.896: 99.4910% ( 20) 00:15:21.277 3.896 - 3.923: 99.5779% ( 14) 00:15:21.277 3.923 - 3.951: 99.6027% ( 4) 00:15:21.277 3.951 - 3.979: 99.6151% ( 2) 00:15:21.277 3.979 - 4.007: 99.6214% ( 1) 00:15:21.277 4.063 - 4.090: 99.6276% ( 1) 00:15:21.277 4.090 - 4.118: 99.6338% ( 1) 00:15:21.277 4.257 - 4.285: 99.6400% ( 1) 00:15:21.277 5.120 - 5.148: 99.6462% ( 1) 00:15:21.277 5.176 - 5.203: 99.6524% ( 1) 00:15:21.277 5.315 - 5.343: 99.6648% ( 2) 00:15:21.277 5.398 - 5.426: 99.6710% ( 1) 00:15:21.277 5.510 - 5.537: 99.6834% ( 2) 00:15:21.277 5.537 - 5.565: 99.6896% ( 1) 00:15:21.277 5.593 - 5.621: 99.6958% ( 1) 00:15:21.277 5.621 - 5.649: 99.7020% ( 1) 00:15:21.277 5.677 - 5.704: 99.7083% ( 1) 00:15:21.277 5.788 - 5.816: 99.7145% ( 1) 00:15:21.277 5.816 - 5.843: 99.7269% ( 2) 00:15:21.277 5.899 - 5.927: 99.7331% ( 1) 00:15:21.277 5.927 - 5.955: 99.7393% ( 1) 00:15:21.277 6.010 - 6.038: 99.7517% ( 2) 00:15:21.277 6.038 - 6.066: 99.7579% ( 1) 00:15:21.277 6.066 - 6.094: 99.7641% ( 1) 00:15:21.277 6.094 - 6.122: 99.7703% ( 1) 00:15:21.277 6.122 - 6.150: 99.7765% ( 1) 00:15:21.277 6.177 - 6.205: 99.7827% ( 1) 00:15:21.277 6.233 - 6.261: 99.7890% ( 1) 00:15:21.277 6.289 - 6.317: 99.8014% ( 2) 00:15:21.277 6.428 - 6.456: 99.8076% ( 1) 00:15:21.277 6.511 - 6.539: 99.8138% ( 1) 00:15:21.277 6.595 - 6.623: 99.8200% ( 1) 00:15:21.277 6.762 - 6.790: 99.8262% ( 1) 00:15:21.277 6.790 - 6.817: 99.8386% ( 2) 00:15:21.277 6.817 - 6.845: 99.8448% ( 1) 00:15:21.277 7.123 - 7.179: 99.8572% ( 2) 00:15:21.277 7.235 - 7.290: 99.8696% ( 2) 00:15:21.277 7.513 - 7.569: 99.8759% ( 1) 00:15:21.277 7.624 - 7.680: 99.8821% ( 1) 00:15:21.277 8.014 - 8.070: 99.8883% ( 1) 00:15:21.277 8.960 - 9.016: 99.8945% ( 1) 00:15:21.277 15.583 - 15.694: 99.9007% ( 1) 00:15:21.277 3989.148 - 4017.642: 100.0000% ( 16) 00:15:21.277 00:15:21.278 Complete 
histogram 00:15:21.278 ================== 00:15:21.278 Range in us Cumulative Count 00:15:21.278 1.760 - 1.767: 0.0124% ( 2) 00:15:21.278 1.767 - [2024-07-15 19:21:31.818279] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.278 1.774: 0.0248% ( 2) 00:15:21.278 1.774 - 1.781: 0.0869% ( 10) 00:15:21.278 1.781 - 1.795: 0.1428% ( 9) 00:15:21.278 1.795 - 1.809: 0.1490% ( 1) 00:15:21.278 1.809 - 1.823: 3.4327% ( 529) 00:15:21.278 1.823 - 1.837: 38.6344% ( 5671) 00:15:21.278 1.837 - 1.850: 57.9702% ( 3115) 00:15:21.278 1.850 - 1.864: 61.3222% ( 540) 00:15:21.278 1.864 - 1.878: 74.9100% ( 2189) 00:15:21.278 1.878 - 1.892: 92.0608% ( 2763) 00:15:21.278 1.892 - 1.906: 95.8721% ( 614) 00:15:21.278 1.906 - 1.920: 97.6785% ( 291) 00:15:21.278 1.920 - 1.934: 98.2992% ( 100) 00:15:21.278 1.934 - 1.948: 98.6344% ( 54) 00:15:21.278 1.948 - 1.962: 98.9323% ( 48) 00:15:21.278 1.962 - 1.976: 99.0875% ( 25) 00:15:21.278 1.976 - 1.990: 99.1868% ( 16) 00:15:21.278 1.990 - 2.003: 99.2427% ( 9) 00:15:21.278 2.003 - 2.017: 99.2800% ( 6) 00:15:21.278 2.017 - 2.031: 99.3048% ( 4) 00:15:21.278 2.031 - 2.045: 99.3234% ( 3) 00:15:21.278 2.045 - 2.059: 99.3296% ( 1) 00:15:21.278 2.059 - 2.073: 99.3358% ( 1) 00:15:21.278 2.087 - 2.101: 99.3420% ( 1) 00:15:21.278 2.268 - 2.282: 99.3482% ( 1) 00:15:21.278 2.296 - 2.310: 99.3606% ( 2) 00:15:21.278 2.310 - 2.323: 99.3731% ( 2) 00:15:21.278 2.323 - 2.337: 99.3793% ( 1) 00:15:21.278 3.464 - 3.478: 99.3855% ( 1) 00:15:21.278 3.492 - 3.506: 99.3917% ( 1) 00:15:21.278 3.534 - 3.548: 99.3979% ( 1) 00:15:21.278 3.645 - 3.673: 99.4041% ( 1) 00:15:21.278 4.090 - 4.118: 99.4103% ( 1) 00:15:21.278 4.619 - 4.647: 99.4165% ( 1) 00:15:21.278 4.647 - 4.675: 99.4227% ( 1) 00:15:21.278 4.786 - 4.814: 99.4351% ( 2) 00:15:21.278 4.842 - 4.870: 99.4413% ( 1) 00:15:21.278 4.897 - 4.925: 99.4475% ( 1) 00:15:21.278 4.981 - 5.009: 99.4538% ( 1) 00:15:21.278 5.148 - 5.176: 99.4600% ( 1) 00:15:21.278 5.176 - 5.203: 99.4662% ( 1) 00:15:21.278 5.315 - 5.343: 99.4724% ( 1) 00:15:21.278 5.370 - 5.398: 99.4786% ( 1) 00:15:21.278 5.537 - 5.565: 99.4848% ( 1) 00:15:21.278 5.871 - 5.899: 99.4972% ( 2) 00:15:21.278 6.038 - 6.066: 99.5034% ( 1) 00:15:21.278 6.066 - 6.094: 99.5096% ( 1) 00:15:21.278 6.929 - 6.957: 99.5220% ( 2) 00:15:21.278 7.096 - 7.123: 99.5282% ( 1) 00:15:21.278 9.906 - 9.962: 99.5345% ( 1) 00:15:21.278 14.358 - 14.470: 99.5407% ( 1) 00:15:21.278 38.957 - 39.179: 99.5469% ( 1) 00:15:21.278 3989.148 - 4017.642: 100.0000% ( 73) 00:15:21.278 00:15:21.278 19:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:21.278 19:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:21.278 19:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:21.278 19:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:21.278 19:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.278 [ 00:15:21.278 { 00:15:21.278 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.278 "subtype": "Discovery", 00:15:21.278 "listen_addresses": [], 00:15:21.278 "allow_any_host": true, 00:15:21.278 "hosts": [] 00:15:21.278 }, 00:15:21.278 { 00:15:21.278 "nqn": "nqn.2019-07.io.spdk:cnode1", 
00:15:21.278 "subtype": "NVMe", 00:15:21.278 "listen_addresses": [ 00:15:21.278 { 00:15:21.278 "trtype": "VFIOUSER", 00:15:21.278 "adrfam": "IPv4", 00:15:21.278 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.278 "trsvcid": "0" 00:15:21.278 } 00:15:21.278 ], 00:15:21.278 "allow_any_host": true, 00:15:21.278 "hosts": [], 00:15:21.278 "serial_number": "SPDK1", 00:15:21.278 "model_number": "SPDK bdev Controller", 00:15:21.278 "max_namespaces": 32, 00:15:21.278 "min_cntlid": 1, 00:15:21.278 "max_cntlid": 65519, 00:15:21.278 "namespaces": [ 00:15:21.278 { 00:15:21.278 "nsid": 1, 00:15:21.278 "bdev_name": "Malloc1", 00:15:21.278 "name": "Malloc1", 00:15:21.278 "nguid": "893E1C1A55404E2BB3C07A35464CE11E", 00:15:21.278 "uuid": "893e1c1a-5540-4e2b-b3c0-7a35464ce11e" 00:15:21.278 }, 00:15:21.278 { 00:15:21.278 "nsid": 2, 00:15:21.278 "bdev_name": "Malloc3", 00:15:21.278 "name": "Malloc3", 00:15:21.278 "nguid": "3C95755FE9C64A9B9D99C0568BC1AFD2", 00:15:21.278 "uuid": "3c95755f-e9c6-4a9b-9d99-c0568bc1afd2" 00:15:21.278 } 00:15:21.278 ] 00:15:21.278 }, 00:15:21.278 { 00:15:21.278 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.278 "subtype": "NVMe", 00:15:21.278 "listen_addresses": [ 00:15:21.278 { 00:15:21.278 "trtype": "VFIOUSER", 00:15:21.278 "adrfam": "IPv4", 00:15:21.278 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.278 "trsvcid": "0" 00:15:21.278 } 00:15:21.278 ], 00:15:21.278 "allow_any_host": true, 00:15:21.278 "hosts": [], 00:15:21.278 "serial_number": "SPDK2", 00:15:21.278 "model_number": "SPDK bdev Controller", 00:15:21.278 "max_namespaces": 32, 00:15:21.278 "min_cntlid": 1, 00:15:21.278 "max_cntlid": 65519, 00:15:21.278 "namespaces": [ 00:15:21.278 { 00:15:21.278 "nsid": 1, 00:15:21.278 "bdev_name": "Malloc2", 00:15:21.278 "name": "Malloc2", 00:15:21.278 "nguid": "AD97BE04AC474565841BC20F1E597BF5", 00:15:21.278 "uuid": "ad97be04-ac47-4565-841b-c20f1e597bf5" 00:15:21.278 } 00:15:21.278 ] 00:15:21.278 } 00:15:21.278 ] 00:15:21.278 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:21.278 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:21.278 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1571117 00:15:21.278 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:21.278 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:21.278 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:21.278 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:21.278 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:21.278 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:21.278 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:21.278 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.574 [2024-07-15 19:21:32.177600] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.574 Malloc4 00:15:21.574 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:21.574 [2024-07-15 19:21:32.403380] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.574 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.832 Asynchronous Event Request test 00:15:21.832 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.832 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.832 Registering asynchronous event callbacks... 00:15:21.832 Starting namespace attribute notice tests for all controllers... 00:15:21.832 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:21.832 aer_cb - Changed Namespace 00:15:21.832 Cleaning up... 00:15:21.832 [ 00:15:21.832 { 00:15:21.832 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.832 "subtype": "Discovery", 00:15:21.832 "listen_addresses": [], 00:15:21.832 "allow_any_host": true, 00:15:21.832 "hosts": [] 00:15:21.832 }, 00:15:21.832 { 00:15:21.832 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.832 "subtype": "NVMe", 00:15:21.832 "listen_addresses": [ 00:15:21.832 { 00:15:21.832 "trtype": "VFIOUSER", 00:15:21.832 "adrfam": "IPv4", 00:15:21.832 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.832 "trsvcid": "0" 00:15:21.832 } 00:15:21.832 ], 00:15:21.832 "allow_any_host": true, 00:15:21.832 "hosts": [], 00:15:21.832 "serial_number": "SPDK1", 00:15:21.832 "model_number": "SPDK bdev Controller", 00:15:21.832 "max_namespaces": 32, 00:15:21.832 "min_cntlid": 1, 00:15:21.832 "max_cntlid": 65519, 00:15:21.832 "namespaces": [ 00:15:21.832 { 00:15:21.832 "nsid": 1, 00:15:21.832 "bdev_name": "Malloc1", 00:15:21.832 "name": "Malloc1", 00:15:21.832 "nguid": "893E1C1A55404E2BB3C07A35464CE11E", 00:15:21.832 "uuid": "893e1c1a-5540-4e2b-b3c0-7a35464ce11e" 00:15:21.832 }, 00:15:21.832 { 00:15:21.832 "nsid": 2, 00:15:21.832 "bdev_name": "Malloc3", 00:15:21.832 "name": "Malloc3", 00:15:21.832 "nguid": "3C95755FE9C64A9B9D99C0568BC1AFD2", 00:15:21.832 "uuid": "3c95755f-e9c6-4a9b-9d99-c0568bc1afd2" 00:15:21.833 } 00:15:21.833 ] 00:15:21.833 }, 00:15:21.833 { 00:15:21.833 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.833 "subtype": "NVMe", 00:15:21.833 "listen_addresses": [ 00:15:21.833 { 00:15:21.833 "trtype": "VFIOUSER", 00:15:21.833 "adrfam": "IPv4", 00:15:21.833 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.833 "trsvcid": "0" 00:15:21.833 } 00:15:21.833 ], 00:15:21.833 "allow_any_host": true, 00:15:21.833 "hosts": [], 00:15:21.833 "serial_number": "SPDK2", 00:15:21.833 "model_number": "SPDK bdev Controller", 00:15:21.833 
"max_namespaces": 32, 00:15:21.833 "min_cntlid": 1, 00:15:21.833 "max_cntlid": 65519, 00:15:21.833 "namespaces": [ 00:15:21.833 { 00:15:21.833 "nsid": 1, 00:15:21.833 "bdev_name": "Malloc2", 00:15:21.833 "name": "Malloc2", 00:15:21.833 "nguid": "AD97BE04AC474565841BC20F1E597BF5", 00:15:21.833 "uuid": "ad97be04-ac47-4565-841b-c20f1e597bf5" 00:15:21.833 }, 00:15:21.833 { 00:15:21.833 "nsid": 2, 00:15:21.833 "bdev_name": "Malloc4", 00:15:21.833 "name": "Malloc4", 00:15:21.833 "nguid": "5CC80E1BC08F46F0925F0AFC928922AA", 00:15:21.833 "uuid": "5cc80e1b-c08f-46f0-925f-0afc928922aa" 00:15:21.833 } 00:15:21.833 ] 00:15:21.833 } 00:15:21.833 ] 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1571117 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1562990 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1562990 ']' 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1562990 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1562990 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1562990' 00:15:21.833 killing process with pid 1562990 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1562990 00:15:21.833 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1562990 00:15:22.091 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:22.091 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:22.091 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:22.091 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:22.091 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:22.091 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1571351 00:15:22.091 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:22.092 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1571351' 00:15:22.092 Process pid: 1571351 00:15:22.092 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:22.092 19:21:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1571351 00:15:22.092 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1571351 ']' 00:15:22.092 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.092 19:21:32 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.092 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.092 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.092 19:21:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:22.351 [2024-07-15 19:21:32.963034] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:22.351 [2024-07-15 19:21:32.963899] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:22.351 [2024-07-15 19:21:32.963936] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.351 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.351 [2024-07-15 19:21:32.990702] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:22.351 [2024-07-15 19:21:33.018761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.351 [2024-07-15 19:21:33.056089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.351 [2024-07-15 19:21:33.056131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.351 [2024-07-15 19:21:33.056139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.351 [2024-07-15 19:21:33.056145] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.351 [2024-07-15 19:21:33.056151] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.351 [2024-07-15 19:21:33.056256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.351 [2024-07-15 19:21:33.056298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.351 [2024-07-15 19:21:33.056370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.351 [2024-07-15 19:21:33.056371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.351 [2024-07-15 19:21:33.127921] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:22.351 [2024-07-15 19:21:33.128014] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:22.351 [2024-07-15 19:21:33.128245] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:22.351 [2024-07-15 19:21:33.128581] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:22.351 [2024-07-15 19:21:33.128827] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
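The interrupt-mode pass restarted above re-creates the vfio-user target through the same RPC sequence that the xtrace below records. As a condensed, hedged sketch only (spdk paths shortened, bdev and subsystem names copied from this trace, rpc.py assumed reachable on /var/tmp/spdk.sock), the per-device bring-up amounts to:

    # hedged sketch of the --interrupt-mode target bring-up traced below
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # (the script waits for the RPC socket /var/tmp/spdk.sock before issuing RPCs)
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I    # extra transport args used only by this pass
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
    # the second device (cnode2 / Malloc2 / vfio-user2/2) repeats the same five RPCs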
00:15:22.351 19:21:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.351 19:21:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:22.351 19:21:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:23.745 19:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:23.745 19:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:23.745 19:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:23.745 19:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.745 19:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:23.745 19:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:23.745 Malloc1 00:15:23.745 19:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:24.003 19:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:24.260 19:21:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:24.260 19:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:24.261 19:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:24.261 19:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:24.518 Malloc2 00:15:24.519 19:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:24.778 19:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1571351 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1571351 ']' 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1571351 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.037 19:21:35 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1571351 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1571351' 00:15:25.037 killing process with pid 1571351 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1571351 00:15:25.037 19:21:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1571351 00:15:25.296 19:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:25.296 19:21:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:25.296 00:15:25.296 real 0m50.081s 00:15:25.296 user 3m18.427s 00:15:25.296 sys 0m3.414s 00:15:25.296 19:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.296 19:21:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:25.296 ************************************ 00:15:25.296 END TEST nvmf_vfio_user 00:15:25.296 ************************************ 00:15:25.296 19:21:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:25.296 19:21:36 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:25.296 19:21:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:25.296 19:21:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.296 19:21:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.296 ************************************ 00:15:25.296 START TEST nvmf_vfio_user_nvme_compliance 00:15:25.296 ************************************ 00:15:25.296 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:25.555 * Looking for test storage... 
00:15:25.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1571887 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1571887' 00:15:25.555 Process pid: 1571887 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1571887 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1571887 ']' 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.555 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.556 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.556 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.556 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:25.556 [2024-07-15 19:21:36.290049] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:15:25.556 [2024-07-15 19:21:36.290090] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.556 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.556 [2024-07-15 19:21:36.316398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:25.556 [2024-07-15 19:21:36.344507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.556 [2024-07-15 19:21:36.383832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.556 [2024-07-15 19:21:36.383873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.556 [2024-07-15 19:21:36.383880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.556 [2024-07-15 19:21:36.383886] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.556 [2024-07-15 19:21:36.383890] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
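The compliance run set up below follows the same pattern with a single subsystem. A hedged sketch of what compliance.sh drives through rpc_cmd (shown here as direct rpc.py calls, spdk paths shortened), ending with the nvme_compliance binary attaching over the same trtype/traddr/subnqn string syntax that spdk_nvme_perf used earlier in this log:

    # hedged sketch of the compliance target bring-up traced below
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    # wait for /var/tmp/spdk.sock, then:
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    ./test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'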
00:15:25.556 [2024-07-15 19:21:36.383986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.556 [2024-07-15 19:21:36.384098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.556 [2024-07-15 19:21:36.384098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.814 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.814 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:15:25.814 19:21:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.749 malloc0 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.749 19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.749 
19:21:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:26.749 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.008 00:15:27.008 00:15:27.008 CUnit - A unit testing framework for C - Version 2.1-3 00:15:27.008 http://cunit.sourceforge.net/ 00:15:27.008 00:15:27.008 00:15:27.008 Suite: nvme_compliance 00:15:27.008 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 19:21:37.693658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.008 [2024-07-15 19:21:37.695003] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:27.008 [2024-07-15 19:21:37.695017] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:27.008 [2024-07-15 19:21:37.695023] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:27.008 [2024-07-15 19:21:37.696682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.008 passed 00:15:27.008 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 19:21:37.776228] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.008 [2024-07-15 19:21:37.779243] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.008 passed 00:15:27.008 Test: admin_identify_ns ...[2024-07-15 19:21:37.858676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.291 [2024-07-15 19:21:37.922246] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:27.291 [2024-07-15 19:21:37.930237] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:27.291 [2024-07-15 19:21:37.951331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.291 passed 00:15:27.291 Test: admin_get_features_mandatory_features ...[2024-07-15 19:21:38.026478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.291 [2024-07-15 19:21:38.029504] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.291 passed 00:15:27.291 Test: admin_get_features_optional_features ...[2024-07-15 19:21:38.105990] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.291 [2024-07-15 19:21:38.111016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.291 passed 00:15:27.549 Test: admin_set_features_number_of_queues ...[2024-07-15 19:21:38.189865] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.549 [2024-07-15 19:21:38.295324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.549 passed 00:15:27.549 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 19:21:38.370327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.549 [2024-07-15 19:21:38.373353] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.549 passed 00:15:27.808 Test: admin_get_log_page_with_lpo ...[2024-07-15 19:21:38.450118] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.808 [2024-07-15 19:21:38.518235] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:27.808 [2024-07-15 19:21:38.531294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.808 passed 00:15:27.808 Test: fabric_property_get ...[2024-07-15 19:21:38.608266] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.808 [2024-07-15 19:21:38.609504] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:27.808 [2024-07-15 19:21:38.611286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.808 passed 00:15:28.066 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 19:21:38.689785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.066 [2024-07-15 19:21:38.691014] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:28.066 [2024-07-15 19:21:38.692809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.066 passed 00:15:28.066 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 19:21:38.771599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.066 [2024-07-15 19:21:38.856230] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:28.066 [2024-07-15 19:21:38.872231] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:28.066 [2024-07-15 19:21:38.877324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.066 passed 00:15:28.324 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 19:21:38.952265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.324 [2024-07-15 19:21:38.953485] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:28.324 [2024-07-15 19:21:38.955287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.324 passed 00:15:28.324 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 19:21:39.033046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.324 [2024-07-15 19:21:39.108230] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:28.324 [2024-07-15 19:21:39.132228] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:28.324 [2024-07-15 19:21:39.137317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.324 passed 00:15:28.583 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 19:21:39.214096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.583 [2024-07-15 19:21:39.215329] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:28.583 [2024-07-15 19:21:39.215352] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:28.583 [2024-07-15 19:21:39.217122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.583 passed 00:15:28.583 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 19:21:39.294876] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.583 [2024-07-15 19:21:39.387233] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:28.583 [2024-07-15 19:21:39.395243] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:28.583 [2024-07-15 19:21:39.403244] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:28.583 [2024-07-15 19:21:39.411237] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:28.841 [2024-07-15 19:21:39.440324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.841 passed 00:15:28.841 Test: admin_create_io_sq_verify_pc ...[2024-07-15 19:21:39.516374] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.841 [2024-07-15 19:21:39.534239] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:28.841 [2024-07-15 19:21:39.551554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.841 passed 00:15:28.841 Test: admin_create_io_qp_max_qps ...[2024-07-15 19:21:39.627078] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.217 [2024-07-15 19:21:40.717236] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:30.476 [2024-07-15 19:21:41.096424] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.476 passed 00:15:30.476 Test: admin_create_io_sq_shared_cq ...[2024-07-15 19:21:41.173447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.476 [2024-07-15 19:21:41.306230] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:30.735 [2024-07-15 19:21:41.343287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.735 passed 00:15:30.735 00:15:30.735 Run Summary: Type Total Ran Passed Failed Inactive 00:15:30.735 suites 1 1 n/a 0 0 00:15:30.735 tests 18 18 18 0 0 00:15:30.735 asserts 360 360 360 0 n/a 00:15:30.735 00:15:30.735 Elapsed time = 1.497 seconds 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1571887 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1571887 ']' 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1571887 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1571887 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1571887' 00:15:30.735 killing process with pid 1571887 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1571887 00:15:30.735 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1571887 00:15:30.994 19:21:41 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:30.994 00:15:30.994 real 0m5.490s 00:15:30.994 user 0m15.554s 00:15:30.994 sys 0m0.434s 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.994 ************************************ 00:15:30.994 END TEST nvmf_vfio_user_nvme_compliance 00:15:30.994 ************************************ 00:15:30.994 19:21:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:30.994 19:21:41 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:30.994 19:21:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:30.994 19:21:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.994 19:21:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:30.994 ************************************ 00:15:30.994 START TEST nvmf_vfio_user_fuzz 00:15:30.994 ************************************ 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:30.994 * Looking for test storage... 00:15:30.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.994 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.995 19:21:41 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1572867 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1572867' 00:15:30.995 Process pid: 1572867 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1572867 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1572867 ']' 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
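waitforlisten 1572867 blocks until the freshly started nvmf_tgt answers on its RPC socket before the script issues any rpc_cmd calls. A rough stand-in for what it does, assuming the default /var/tmp/spdk.sock socket (the real helper in autotest_common.sh performs additional checks):

    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done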
00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.995 19:21:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:31.254 19:21:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.254 19:21:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:15:31.254 19:21:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:32.189 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:32.189 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.189 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.448 malloc0 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:32.448 19:21:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:04.547 Fuzzing completed. 
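Both compliance.sh earlier and vfio_user_fuzz.sh here stand up the same vfio-user target through rpc_cmd before driving traffic at it. A condensed sketch of that sequence issued directly with scripts/rpc.py, followed by the fuzz run itself; every value is taken from the log above, paths are relative to the SPDK source tree, and a running nvmf_tgt on the default RPC socket is assumed:

    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MB malloc bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

The -t 30 / -S 123456 arguments match the invocation logged above (the 30-second budget lines up with the ~32 s wall-clock time reported at the end of the test), and the summary that follows shows roughly one million I/O commands and ~240k admin commands pushed through the vfio-user queue pairs in that window.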
Shutting down the fuzz application 00:16:04.547 00:16:04.547 Dumping successful admin opcodes: 00:16:04.547 8, 9, 10, 24, 00:16:04.547 Dumping successful io opcodes: 00:16:04.547 0, 00:16:04.547 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1004982, total successful commands: 3939, random_seed: 4151235520 00:16:04.547 NS: 0x200003a1ef00 admin qp, Total commands completed: 242848, total successful commands: 1953, random_seed: 45151232 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1572867 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1572867 ']' 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1572867 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1572867 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1572867' 00:16:04.547 killing process with pid 1572867 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1572867 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1572867 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:04.547 00:16:04.547 real 0m32.046s 00:16:04.547 user 0m30.015s 00:16:04.547 sys 0m30.447s 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.547 19:22:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.547 ************************************ 00:16:04.547 END TEST nvmf_vfio_user_fuzz 00:16:04.547 ************************************ 00:16:04.547 19:22:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.547 19:22:13 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:04.547 19:22:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.547 19:22:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.547 19:22:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.547 ************************************ 
00:16:04.547 START TEST nvmf_host_management 00:16:04.547 ************************************ 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:04.547 * Looking for test storage... 00:16:04.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.547 
19:22:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.547 19:22:13 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.547 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.548 19:22:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.548 19:22:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.548 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.548 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.548 19:22:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.548 19:22:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:07.833 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:07.833 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.833 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:07.834 Found net devices under 0000:86:00.0: cvl_0_0 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:07.834 Found net devices under 0000:86:00.1: cvl_0_1 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:07.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:16:07.834 00:16:07.834 --- 10.0.0.2 ping statistics --- 00:16:07.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.834 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:16:07.834 00:16:07.834 --- 10.0.0.1 ping statistics --- 00:16:07.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.834 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1581035 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1581035 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1581035 ']' 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
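nvmf_tcp_init above moves one port of the E810 pair (cvl_0_0) into its own network namespace, so the target listens on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator stays on 10.0.0.1 in the root namespace. Condensed from the commands in the log, with the interface names being the ones this rig reported:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The two sub-millisecond ping round trips logged above confirm the wiring before modprobe nvme-tcp loads the host-side NVMe/TCP driver and the target application is started inside the namespace.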
00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:07.834 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.834 [2024-07-15 19:22:18.577813] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:16:07.834 [2024-07-15 19:22:18.577856] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.834 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.834 [2024-07-15 19:22:18.608195] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:07.834 [2024-07-15 19:22:18.636538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.834 [2024-07-15 19:22:18.679832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.834 [2024-07-15 19:22:18.679870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.834 [2024-07-15 19:22:18.679877] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.834 [2024-07-15 19:22:18.679884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.834 [2024-07-15 19:22:18.679889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
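For reference, the target for this test is launched inside the namespace exactly as shown in the log; -m 0x1E (bits 1-4) is the core mask that produces the four reactors reported next, -e 0xFFFF enables every tracepoint group (hence the Tracepoint Group Mask notice), and -i 0 sets the shared-memory ID:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

A narrower mask (for example -m 0x6 for cores 1-2) would shrink the reactor set accordingly; 0x1E is simply what host_management.sh requests via nvmfappstart -m 0x1E.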
00:16:07.834 [2024-07-15 19:22:18.679928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.834 [2024-07-15 19:22:18.680012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.834 [2024-07-15 19:22:18.680120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.834 [2024-07-15 19:22:18.680121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.094 [2024-07-15 19:22:18.830355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.094 Malloc0 00:16:08.094 [2024-07-15 19:22:18.890032] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1581183 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1581183 /var/tmp/bdevperf.sock 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1581183 ']' 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:08.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:08.094 { 00:16:08.094 "params": { 00:16:08.094 "name": "Nvme$subsystem", 00:16:08.094 "trtype": "$TEST_TRANSPORT", 00:16:08.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:08.094 "adrfam": "ipv4", 00:16:08.094 "trsvcid": "$NVMF_PORT", 00:16:08.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:08.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:08.094 "hdgst": ${hdgst:-false}, 00:16:08.094 "ddgst": ${ddgst:-false} 00:16:08.094 }, 00:16:08.094 "method": "bdev_nvme_attach_controller" 00:16:08.094 } 00:16:08.094 EOF 00:16:08.094 )") 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:08.094 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:08.353 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:08.353 19:22:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:08.353 "params": { 00:16:08.353 "name": "Nvme0", 00:16:08.353 "trtype": "tcp", 00:16:08.353 "traddr": "10.0.0.2", 00:16:08.353 "adrfam": "ipv4", 00:16:08.353 "trsvcid": "4420", 00:16:08.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:08.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:08.353 "hdgst": false, 00:16:08.353 "ddgst": false 00:16:08.353 }, 00:16:08.353 "method": "bdev_nvme_attach_controller" 00:16:08.353 }' 00:16:08.353 [2024-07-15 19:22:18.980665] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:16:08.353 [2024-07-15 19:22:18.980707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581183 ] 00:16:08.353 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.354 [2024-07-15 19:22:19.007141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
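The here-doc fragment printed above is what gen_nvmf_target_json feeds to bdevperf through --json /dev/fd/63: a single bdev_nvme_attach_controller call that makes bdevperf connect to the target at 10.0.0.2:4420 and expose it as Nvme0n1 for the verify workload. A sketch of the assembled document, with the params taken verbatim from the log; the outer "subsystems"/"bdev" wrapper is an assumption about gen_nvmf_target_json rather than something shown in the log, and the file name is hypothetical:

    cat <<'EOF' > /tmp/bdevperf_nvme0.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10

Because the attach call names the controller Nvme0, its first namespace shows up as the bdev Nvme0n1, which is the name the bdev_get_iostat polling below asks for.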
00:16:08.354 [2024-07-15 19:22:19.036058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.354 [2024-07-15 19:22:19.075995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.612 Running I/O for 10 seconds... 00:16:08.612 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.612 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:08.613 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.874 19:22:19 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=526 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 526 -ge 100 ']' 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.874 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.874 [2024-07-15 19:22:19.640496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640533] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640541] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640547] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640554] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640560] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640566] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640572] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640584] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640602] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640620] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640626] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640632] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640644] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640650] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.874 [2024-07-15 19:22:19.640681] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640686] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640698] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640704] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640710] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640716] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640722] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640728] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640734] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640741] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640747] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640760] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the 
state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640772] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640784] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640803] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640816] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640834] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640853] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640859] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640865] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640871] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640877] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640895] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640901] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640907] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.640913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d0d0 is same with the state(5) to be set 00:16:08.875 [2024-07-15 19:22:19.642082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.875 [2024-07-15 19:22:19.642410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.875 [2024-07-15 19:22:19.642420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:08.876 [2024-07-15 19:22:19.642462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 
19:22:19.642645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642826] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.642988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.642996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.643004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.643014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.643022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.643032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.643042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.643052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.643060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.643071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.643079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.643088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.643097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.643106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.643114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.643124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.876 [2024-07-15 19:22:19.643132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.876 [2024-07-15 19:22:19.643142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.877 [2024-07-15 19:22:19.643150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.877 [2024-07-15 19:22:19.643159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.877 [2024-07-15 19:22:19.643168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.877 [2024-07-15 19:22:19.643177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.877 [2024-07-15 19:22:19.643185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.877 [2024-07-15 19:22:19.643195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.877 [2024-07-15 19:22:19.643203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.877 [2024-07-15 19:22:19.643214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.877 [2024-07-15 19:22:19.643222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.877 [2024-07-15 19:22:19.643236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.877 [2024-07-15 19:22:19.643245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.877 [2024-07-15 19:22:19.643254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.877 [2024-07-15 19:22:19.643263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.877 [2024-07-15 19:22:19.643275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.877 [2024-07-15 19:22:19.643284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.877 [2024-07-15 19:22:19.643378] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbda270 was disconnected and freed. reset controller. 
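The long run of "recv state of tqpair ... is same with the state(5)" and "ABORTED - SQ DELETION" messages above is the expected fallout of the test revoking host access: once the host NQN is removed from the subsystem, the target tears down the queue pair, every in-flight command completes with an abort status, and the initiator frees the qpair and resets the controller. Reduced to plain RPC calls, the driving sequence looks roughly like this (a sketch only; the test goes through the rpc_cmd wrapper shown in the trace, and the exact delay is an assumption):

# Revoke the host's access, give outstanding I/O time to abort, then restore it so the
# initiator's controller reset can reconnect.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0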
00:16:08.877 [2024-07-15 19:22:19.644306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:08.877 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.877 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:08.877 task offset: 75392 on job bdev=Nvme0n1 fails 00:16:08.877 00:16:08.877 Latency(us) 00:16:08.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.877 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:08.877 Job: Nvme0n1 ended in about 0.42 seconds with error 00:16:08.877 Verification LBA range: start 0x0 length 0x400 00:16:08.877 Nvme0n1 : 0.42 1407.69 87.98 152.96 0.00 39974.85 1588.54 34192.70 00:16:08.877 =================================================================================================================== 00:16:08.877 Total : 1407.69 87.98 152.96 0.00 39974.85 1588.54 34192.70 00:16:08.877 [2024-07-15 19:22:19.645928] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:08.877 [2024-07-15 19:22:19.645947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c90d0 (9): Bad file descriptor 00:16:08.877 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.877 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.877 19:22:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.877 19:22:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:08.877 [2024-07-15 19:22:19.695478] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
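Earlier in this run, the waitforio step (read_io_count=67, then 526) is what decides the workload is live before the host is removed. Stripped of the test's helper functions, that gate is essentially the loop below; variable names are illustrative, while the RPC socket, bdev name, threshold, and retry/sleep values are taken from the trace.

# Poll the bdev's read counter over the bdevperf RPC socket until at least 100 reads
# have completed, retrying up to 10 times with a 0.25 s pause between attempts.
i=10 ret=1
while (( i != 0 )); do
    reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
            jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
    (( i-- ))
done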
00:16:09.814 19:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1581183 00:16:09.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1581183) - No such process 00:16:09.814 19:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:09.814 19:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:09.815 19:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:09.815 19:22:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:09.815 19:22:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:09.815 19:22:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.815 19:22:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.815 19:22:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.815 { 00:16:09.815 "params": { 00:16:09.815 "name": "Nvme$subsystem", 00:16:09.815 "trtype": "$TEST_TRANSPORT", 00:16:09.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.815 "adrfam": "ipv4", 00:16:09.815 "trsvcid": "$NVMF_PORT", 00:16:09.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.815 "hdgst": ${hdgst:-false}, 00:16:09.815 "ddgst": ${ddgst:-false} 00:16:09.815 }, 00:16:09.815 "method": "bdev_nvme_attach_controller" 00:16:09.815 } 00:16:09.815 EOF 00:16:09.815 )") 00:16:09.815 19:22:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:10.119 19:22:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:10.119 19:22:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:10.119 19:22:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:10.119 "params": { 00:16:10.119 "name": "Nvme0", 00:16:10.119 "trtype": "tcp", 00:16:10.119 "traddr": "10.0.0.2", 00:16:10.119 "adrfam": "ipv4", 00:16:10.119 "trsvcid": "4420", 00:16:10.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:10.119 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:10.119 "hdgst": false, 00:16:10.119 "ddgst": false 00:16:10.119 }, 00:16:10.119 "method": "bdev_nvme_attach_controller" 00:16:10.119 }' 00:16:10.119 [2024-07-15 19:22:20.708151] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:16:10.119 [2024-07-15 19:22:20.708199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581432 ] 00:16:10.119 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.119 [2024-07-15 19:22:20.733983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:10.119 [2024-07-15 19:22:20.763294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.119 [2024-07-15 19:22:20.803600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.398 Running I/O for 1 seconds... 
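With host access restored, recovery is checked by a second, shorter bdevperf pass against the same subsystem. Modulo the helper wrappers, the invocation in the trace boils down to the line below (a sketch, reusing the hypothetical gen_cfg helper from the earlier example):

# One-second verify pass to confirm I/O flows again after the controller reset.
./build/examples/bdevperf --json <(gen_cfg) -q 64 -o 65536 -w verify -t 1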
00:16:11.334 00:16:11.334 Latency(us) 00:16:11.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.334 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:11.334 Verification LBA range: start 0x0 length 0x400 00:16:11.334 Nvme0n1 : 1.02 1881.91 117.62 0.00 0.00 33399.03 5328.36 31457.28 00:16:11.334 =================================================================================================================== 00:16:11.334 Total : 1881.91 117.62 0.00 0.00 33399.03 5328.36 31457.28 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.593 rmmod nvme_tcp 00:16:11.593 rmmod nvme_fabrics 00:16:11.593 rmmod nvme_keyring 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1581035 ']' 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1581035 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1581035 ']' 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1581035 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1581035 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1581035' 00:16:11.593 killing process with pid 1581035 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1581035 00:16:11.593 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1581035 00:16:11.852 [2024-07-15 19:22:22.550005] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:11.852 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.852 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.852 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.852 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.852 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.852 19:22:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.852 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.852 19:22:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.388 19:22:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:14.388 19:22:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:14.388 00:16:14.388 real 0m10.837s 00:16:14.388 user 0m18.609s 00:16:14.388 sys 0m4.500s 00:16:14.388 19:22:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.388 19:22:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:14.388 ************************************ 00:16:14.388 END TEST nvmf_host_management 00:16:14.388 ************************************ 00:16:14.388 19:22:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:14.388 19:22:24 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:14.388 19:22:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:14.388 19:22:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.388 19:22:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.388 ************************************ 00:16:14.388 START TEST nvmf_lvol 00:16:14.388 ************************************ 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:14.388 * Looking for test storage... 
00:16:14.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.388 19:22:24 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:14.388 19:22:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:19.659 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:19.659 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:19.659 Found net devices under 0000:86:00.0: cvl_0_0 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:19.659 Found net devices under 0000:86:00.1: cvl_0_1 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:19.659 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:19.659 
19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:19.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:16:19.660 00:16:19.660 --- 10.0.0.2 ping statistics --- 00:16:19.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.660 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:19.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:16:19.660 00:16:19.660 --- 10.0.0.1 ping statistics --- 00:16:19.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.660 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1585186 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1585186 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1585186 ']' 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.660 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:19.660 [2024-07-15 19:22:30.426029] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:16:19.660 [2024-07-15 19:22:30.426071] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.660 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.660 [2024-07-15 19:22:30.456966] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:19.660 [2024-07-15 19:22:30.485773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:19.918 [2024-07-15 19:22:30.527107] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
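The nvmf_tcp_init steps traced above build an isolated point-to-point link: one NIC port is moved into a network namespace and becomes the target side, the other stays in the default namespace as the initiator. A condensed replay of those commands, with the interface and namespace names taken from this run:

    TARGET_IF=cvl_0_0          # moved into the namespace, becomes the target side
    INITIATOR_IF=cvl_0_1       # stays in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the default NVMe/TCP port and check reachability in both directions.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1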
00:16:19.918 [2024-07-15 19:22:30.527146] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.918 [2024-07-15 19:22:30.527153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.918 [2024-07-15 19:22:30.527160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.918 [2024-07-15 19:22:30.527165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.918 [2024-07-15 19:22:30.527203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.918 [2024-07-15 19:22:30.527308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.918 [2024-07-15 19:22:30.527311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.918 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:19.918 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:19.918 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:19.918 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:19.918 19:22:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:19.918 19:22:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.918 19:22:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:20.177 [2024-07-15 19:22:30.813836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.177 19:22:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:20.435 19:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:20.435 19:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:20.435 19:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:20.435 19:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:20.692 19:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:20.951 19:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b9c7df78-c530-4294-9af1-5edb32e68563 00:16:20.951 19:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b9c7df78-c530-4294-9af1-5edb32e68563 lvol 20 00:16:20.951 19:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1c4e46a1-4cd6-48fd-b498-4693d2c161d0 00:16:20.951 19:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:21.209 19:22:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1c4e46a1-4cd6-48fd-b498-4693d2c161d0 00:16:21.468 19:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:21.468 [2024-07-15 19:22:32.317507] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.726 19:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:21.726 19:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1585579 00:16:21.726 19:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:21.726 19:22:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:21.726 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.101 19:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1c4e46a1-4cd6-48fd-b498-4693d2c161d0 MY_SNAPSHOT 00:16:23.101 19:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b9c9ce9b-7236-482a-9c0d-20f988ed7aba 00:16:23.101 19:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1c4e46a1-4cd6-48fd-b498-4693d2c161d0 30 00:16:23.360 19:22:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b9c9ce9b-7236-482a-9c0d-20f988ed7aba MY_CLONE 00:16:23.618 19:22:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=007741b0-e68e-49a9-add1-7ba256dbcde1 00:16:23.618 19:22:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 007741b0-e68e-49a9-add1-7ba256dbcde1 00:16:24.184 19:22:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1585579 00:16:32.303 Initializing NVMe Controllers 00:16:32.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:32.303 Controller IO queue size 128, less than required. 00:16:32.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:32.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:32.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:32.303 Initialization complete. Launching workers. 
00:16:32.303 ======================================================== 00:16:32.303 Latency(us) 00:16:32.303 Device Information : IOPS MiB/s Average min max 00:16:32.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12250.30 47.85 10451.62 1208.78 68374.64 00:16:32.303 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12142.30 47.43 10542.29 3778.05 67011.47 00:16:32.303 ======================================================== 00:16:32.303 Total : 24392.60 95.28 10496.75 1208.78 68374.64 00:16:32.303 00:16:32.303 19:22:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:32.303 19:22:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1c4e46a1-4cd6-48fd-b498-4693d2c161d0 00:16:32.560 19:22:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9c7df78-c530-4294-9af1-5edb32e68563 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.819 rmmod nvme_tcp 00:16:32.819 rmmod nvme_fabrics 00:16:32.819 rmmod nvme_keyring 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1585186 ']' 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1585186 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1585186 ']' 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1585186 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1585186 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1585186' 00:16:32.819 killing process with pid 1585186 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1585186 00:16:32.819 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1585186 00:16:33.078 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:33.078 
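Stripped of the xtrace noise, the nvmf_lvol round trip recorded above reduces to the RPC sequence below. The commands are the ones in the trace; paths are shortened to $SPDK_DIR, and capturing each call's printed UUID into a shell variable is an illustrative simplification of what the harness does:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o -u 8192

    # Two malloc bdevs striped into a raid0, an lvstore on top, one 20 MiB lvol.
    m0=$($RPC bdev_malloc_create 64 512)
    m1=$($RPC bdev_malloc_create 64 512)
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"
    lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)

    # Export the lvol over NVMe/TCP.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # While spdk_nvme_perf drives random writes, exercise the lvol operations.
    "$SPDK_DIR/build/bin/spdk_nvme_perf" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $RPC bdev_lvol_resize "$lvol" 30
    clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
    $RPC bdev_lvol_inflate "$clone"
    wait

    # Teardown, as at the end of the trace.
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC bdev_lvol_delete "$lvol"
    $RPC bdev_lvol_delete_lvstore -u "$lvs"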
19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.078 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.078 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.078 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.078 19:22:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.078 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.078 19:22:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.046 19:22:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:35.046 00:16:35.046 real 0m21.118s 00:16:35.046 user 1m2.267s 00:16:35.046 sys 0m6.732s 00:16:35.046 19:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:35.046 19:22:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:35.046 ************************************ 00:16:35.046 END TEST nvmf_lvol 00:16:35.046 ************************************ 00:16:35.046 19:22:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:35.046 19:22:45 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:35.046 19:22:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:35.046 19:22:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:35.046 19:22:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:35.046 ************************************ 00:16:35.046 START TEST nvmf_lvs_grow 00:16:35.046 ************************************ 00:16:35.046 19:22:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:35.306 * Looking for test storage... 
00:16:35.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.306 19:22:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.306 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:35.307 19:22:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:40.580 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:40.580 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:40.580 Found net devices under 0000:86:00.0: cvl_0_0 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:40.580 Found net devices under 0000:86:00.1: cvl_0_1 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.580 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:16:40.581 00:16:40.581 --- 10.0.0.2 ping statistics --- 00:16:40.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.581 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:40.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:16:40.581 00:16:40.581 --- 10.0.0.1 ping statistics --- 00:16:40.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.581 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1590800 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1590800 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1590800 ']' 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:40.581 19:22:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:40.581 [2024-07-15 19:22:51.043282] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:16:40.581 [2024-07-15 19:22:51.043327] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.581 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.581 [2024-07-15 19:22:51.072710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
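As the trace shows, each test starts the target application inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A rough equivalent of that start-and-wait step, assuming the default RPC socket path /var/tmp/spdk.sock; the polling loop stands in for the harness's waitforlisten helper:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Poll the RPC socket until the target is ready to accept commands.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening"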
00:16:40.581 [2024-07-15 19:22:51.100156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.581 [2024-07-15 19:22:51.140208] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.581 [2024-07-15 19:22:51.140249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.581 [2024-07-15 19:22:51.140256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.581 [2024-07-15 19:22:51.140262] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.581 [2024-07-15 19:22:51.140268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.581 [2024-07-15 19:22:51.140289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:40.581 [2024-07-15 19:22:51.416144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.581 19:22:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:40.840 ************************************ 00:16:40.840 START TEST lvs_grow_clean 00:16:40.840 ************************************ 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:40.840 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:41.099 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:41.099 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:41.099 19:22:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:41.357 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:41.358 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:41.358 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 65ce0acf-8a4b-4644-9018-f5241621a0ea lvol 150 00:16:41.358 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=25896827-847a-42d6-8f07-d9d42036d5a4 00:16:41.358 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:41.358 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:41.616 [2024-07-15 19:22:52.344927] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:41.616 [2024-07-15 19:22:52.344976] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:41.616 true 00:16:41.616 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:41.616 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:41.875 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:41.875 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:41.875 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 25896827-847a-42d6-8f07-d9d42036d5a4 00:16:42.134 19:22:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:42.393 [2024-07-15 19:22:53.018942] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1591084 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1591084 /var/tmp/bdevperf.sock 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1591084 ']' 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.393 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:42.393 [2024-07-15 19:22:53.247568] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:16:42.393 [2024-07-15 19:22:53.247618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591084 ] 00:16:42.652 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.652 [2024-07-15 19:22:53.274455] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
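The lvs_grow_clean case set up above, together with the grow step that appears further down in the trace, reduces to the sequence below: an AIO bdev backed by a 200 MiB file gets an lvstore with 4 MiB clusters (49 data clusters), the backing file is grown to 400 MiB, the AIO bdev is rescanned, and bdev_lvol_grow_lvstore lets the lvstore use the new space (99 clusters). Paths are shortened to $SPDK_DIR; the jq checks mirror the ones in the trace:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"
    AIO_FILE="$SPDK_DIR/test/nvmf/target/aio_bdev"

    rm -f "$AIO_FILE"
    truncate -s 200M "$AIO_FILE"
    $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096

    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)

    # Grow the backing file, let the AIO bdev pick up the new size, then grow the lvstore.
    truncate -s 400M "$AIO_FILE"
    $RPC bdev_aio_rescan aio_bdev
    $RPC bdev_lvol_grow_lvstore -u "$lvs"
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99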
00:16:42.652 [2024-07-15 19:22:53.302018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.652 [2024-07-15 19:22:53.343215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.652 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.652 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:16:42.652 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:42.909 Nvme0n1 00:16:42.909 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:43.167 [ 00:16:43.167 { 00:16:43.167 "name": "Nvme0n1", 00:16:43.167 "aliases": [ 00:16:43.167 "25896827-847a-42d6-8f07-d9d42036d5a4" 00:16:43.167 ], 00:16:43.167 "product_name": "NVMe disk", 00:16:43.167 "block_size": 4096, 00:16:43.167 "num_blocks": 38912, 00:16:43.167 "uuid": "25896827-847a-42d6-8f07-d9d42036d5a4", 00:16:43.167 "assigned_rate_limits": { 00:16:43.167 "rw_ios_per_sec": 0, 00:16:43.167 "rw_mbytes_per_sec": 0, 00:16:43.167 "r_mbytes_per_sec": 0, 00:16:43.167 "w_mbytes_per_sec": 0 00:16:43.167 }, 00:16:43.167 "claimed": false, 00:16:43.167 "zoned": false, 00:16:43.167 "supported_io_types": { 00:16:43.167 "read": true, 00:16:43.167 "write": true, 00:16:43.167 "unmap": true, 00:16:43.167 "flush": true, 00:16:43.167 "reset": true, 00:16:43.167 "nvme_admin": true, 00:16:43.167 "nvme_io": true, 00:16:43.167 "nvme_io_md": false, 00:16:43.167 "write_zeroes": true, 00:16:43.167 "zcopy": false, 00:16:43.167 "get_zone_info": false, 00:16:43.167 "zone_management": false, 00:16:43.167 "zone_append": false, 00:16:43.167 "compare": true, 00:16:43.167 "compare_and_write": true, 00:16:43.167 "abort": true, 00:16:43.167 "seek_hole": false, 00:16:43.167 "seek_data": false, 00:16:43.167 "copy": true, 00:16:43.167 "nvme_iov_md": false 00:16:43.167 }, 00:16:43.167 "memory_domains": [ 00:16:43.167 { 00:16:43.167 "dma_device_id": "system", 00:16:43.167 "dma_device_type": 1 00:16:43.167 } 00:16:43.167 ], 00:16:43.167 "driver_specific": { 00:16:43.167 "nvme": [ 00:16:43.167 { 00:16:43.167 "trid": { 00:16:43.167 "trtype": "TCP", 00:16:43.167 "adrfam": "IPv4", 00:16:43.167 "traddr": "10.0.0.2", 00:16:43.167 "trsvcid": "4420", 00:16:43.167 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:43.167 }, 00:16:43.167 "ctrlr_data": { 00:16:43.167 "cntlid": 1, 00:16:43.167 "vendor_id": "0x8086", 00:16:43.167 "model_number": "SPDK bdev Controller", 00:16:43.167 "serial_number": "SPDK0", 00:16:43.167 "firmware_revision": "24.09", 00:16:43.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:43.167 "oacs": { 00:16:43.167 "security": 0, 00:16:43.167 "format": 0, 00:16:43.167 "firmware": 0, 00:16:43.167 "ns_manage": 0 00:16:43.167 }, 00:16:43.167 "multi_ctrlr": true, 00:16:43.167 "ana_reporting": false 00:16:43.167 }, 00:16:43.167 "vs": { 00:16:43.167 "nvme_version": "1.3" 00:16:43.167 }, 00:16:43.167 "ns_data": { 00:16:43.167 "id": 1, 00:16:43.167 "can_share": true 00:16:43.167 } 00:16:43.167 } 00:16:43.167 ], 00:16:43.167 "mp_policy": "active_passive" 00:16:43.167 } 00:16:43.167 } 00:16:43.167 ] 00:16:43.167 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1591306 00:16:43.167 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:43.167 19:22:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:43.167 Running I/O for 10 seconds... 00:16:44.103 Latency(us) 00:16:44.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.103 Nvme0n1 : 1.00 22979.00 89.76 0.00 0.00 0.00 0.00 0.00 00:16:44.103 =================================================================================================================== 00:16:44.103 Total : 22979.00 89.76 0.00 0.00 0.00 0.00 0.00 00:16:44.103 00:16:45.039 19:22:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:45.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.297 Nvme0n1 : 2.00 23126.50 90.34 0.00 0.00 0.00 0.00 0.00 00:16:45.297 =================================================================================================================== 00:16:45.297 Total : 23126.50 90.34 0.00 0.00 0.00 0.00 0.00 00:16:45.297 00:16:45.297 true 00:16:45.297 19:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:45.297 19:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:45.556 19:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:45.556 19:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:45.556 19:22:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1591306 00:16:46.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.122 Nvme0n1 : 3.00 23173.33 90.52 0.00 0.00 0.00 0.00 0.00 00:16:46.122 =================================================================================================================== 00:16:46.122 Total : 23173.33 90.52 0.00 0.00 0.00 0.00 0.00 00:16:46.122 00:16:47.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.497 Nvme0n1 : 4.00 23235.25 90.76 0.00 0.00 0.00 0.00 0.00 00:16:47.497 =================================================================================================================== 00:16:47.497 Total : 23235.25 90.76 0.00 0.00 0.00 0.00 0.00 00:16:47.497 00:16:48.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.433 Nvme0n1 : 5.00 23286.00 90.96 0.00 0.00 0.00 0.00 0.00 00:16:48.433 =================================================================================================================== 00:16:48.433 Total : 23286.00 90.96 0.00 0.00 0.00 0.00 0.00 00:16:48.433 00:16:49.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.369 Nvme0n1 : 6.00 23319.50 91.09 0.00 0.00 0.00 0.00 0.00 00:16:49.369 =================================================================================================================== 
00:16:49.369 Total : 23319.50 91.09 0.00 0.00 0.00 0.00 0.00 00:16:49.369 00:16:50.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.302 Nvme0n1 : 7.00 23334.43 91.15 0.00 0.00 0.00 0.00 0.00 00:16:50.302 =================================================================================================================== 00:16:50.302 Total : 23334.43 91.15 0.00 0.00 0.00 0.00 0.00 00:16:50.302 00:16:51.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.236 Nvme0n1 : 8.00 23361.75 91.26 0.00 0.00 0.00 0.00 0.00 00:16:51.236 =================================================================================================================== 00:16:51.236 Total : 23361.75 91.26 0.00 0.00 0.00 0.00 0.00 00:16:51.236 00:16:52.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.291 Nvme0n1 : 9.00 23382.89 91.34 0.00 0.00 0.00 0.00 0.00 00:16:52.291 =================================================================================================================== 00:16:52.291 Total : 23382.89 91.34 0.00 0.00 0.00 0.00 0.00 00:16:52.291 00:16:53.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.294 Nvme0n1 : 10.00 23339.70 91.17 0.00 0.00 0.00 0.00 0.00 00:16:53.294 =================================================================================================================== 00:16:53.294 Total : 23339.70 91.17 0.00 0.00 0.00 0.00 0.00 00:16:53.294 00:16:53.294 00:16:53.294 Latency(us) 00:16:53.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.294 Nvme0n1 : 10.00 23341.44 91.18 0.00 0.00 5480.35 3305.29 12537.32 00:16:53.294 =================================================================================================================== 00:16:53.294 Total : 23341.44 91.18 0.00 0.00 5480.35 3305.29 12537.32 00:16:53.294 0 00:16:53.294 19:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1591084 00:16:53.294 19:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1591084 ']' 00:16:53.294 19:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1591084 00:16:53.294 19:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:16:53.294 19:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.294 19:23:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1591084 00:16:53.294 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:53.294 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:53.294 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1591084' 00:16:53.294 killing process with pid 1591084 00:16:53.295 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1591084 00:16:53.295 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.295 00:16:53.295 Latency(us) 00:16:53.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.295 
=================================================================================================================== 00:16:53.295 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.295 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1591084 00:16:53.553 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:53.553 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:53.812 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:53.812 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:54.071 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:54.071 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:54.071 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:54.071 [2024-07-15 19:23:04.918715] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:54.329 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:54.329 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:54.329 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:54.329 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:54.329 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.329 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:54.329 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.329 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:54.329 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.330 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:54.330 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:54.330 19:23:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:54.330 request: 00:16:54.330 { 00:16:54.330 "uuid": "65ce0acf-8a4b-4644-9018-f5241621a0ea", 00:16:54.330 "method": "bdev_lvol_get_lvstores", 00:16:54.330 "req_id": 1 00:16:54.330 } 00:16:54.330 Got JSON-RPC error response 00:16:54.330 response: 00:16:54.330 { 00:16:54.330 "code": -19, 00:16:54.330 "message": "No such device" 00:16:54.330 } 00:16:54.330 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:54.330 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:54.330 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:54.330 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:54.330 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:54.588 aio_bdev 00:16:54.588 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 25896827-847a-42d6-8f07-d9d42036d5a4 00:16:54.588 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=25896827-847a-42d6-8f07-d9d42036d5a4 00:16:54.588 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:54.588 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:16:54.588 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:54.588 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:54.588 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:54.847 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 25896827-847a-42d6-8f07-d9d42036d5a4 -t 2000 00:16:54.847 [ 00:16:54.847 { 00:16:54.847 "name": "25896827-847a-42d6-8f07-d9d42036d5a4", 00:16:54.847 "aliases": [ 00:16:54.847 "lvs/lvol" 00:16:54.847 ], 00:16:54.847 "product_name": "Logical Volume", 00:16:54.847 "block_size": 4096, 00:16:54.847 "num_blocks": 38912, 00:16:54.847 "uuid": "25896827-847a-42d6-8f07-d9d42036d5a4", 00:16:54.847 "assigned_rate_limits": { 00:16:54.847 "rw_ios_per_sec": 0, 00:16:54.847 "rw_mbytes_per_sec": 0, 00:16:54.847 "r_mbytes_per_sec": 0, 00:16:54.847 "w_mbytes_per_sec": 0 00:16:54.847 }, 00:16:54.847 "claimed": false, 00:16:54.847 "zoned": false, 00:16:54.847 "supported_io_types": { 00:16:54.847 "read": true, 00:16:54.847 "write": true, 00:16:54.847 "unmap": true, 00:16:54.847 "flush": false, 00:16:54.847 "reset": true, 00:16:54.847 "nvme_admin": false, 00:16:54.847 "nvme_io": false, 00:16:54.847 "nvme_io_md": false, 00:16:54.847 "write_zeroes": true, 00:16:54.847 "zcopy": false, 00:16:54.848 "get_zone_info": false, 00:16:54.848 "zone_management": false, 00:16:54.848 "zone_append": false, 00:16:54.848 "compare": false, 00:16:54.848 "compare_and_write": false, 00:16:54.848 "abort": false, 00:16:54.848 "seek_hole": true, 00:16:54.848 
"seek_data": true, 00:16:54.848 "copy": false, 00:16:54.848 "nvme_iov_md": false 00:16:54.848 }, 00:16:54.848 "driver_specific": { 00:16:54.848 "lvol": { 00:16:54.848 "lvol_store_uuid": "65ce0acf-8a4b-4644-9018-f5241621a0ea", 00:16:54.848 "base_bdev": "aio_bdev", 00:16:54.848 "thin_provision": false, 00:16:54.848 "num_allocated_clusters": 38, 00:16:54.848 "snapshot": false, 00:16:54.848 "clone": false, 00:16:54.848 "esnap_clone": false 00:16:54.848 } 00:16:54.848 } 00:16:54.848 } 00:16:54.848 ] 00:16:54.848 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:16:54.848 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:54.848 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:55.106 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:55.106 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:55.106 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:55.365 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:55.365 19:23:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 25896827-847a-42d6-8f07-d9d42036d5a4 00:16:55.365 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 65ce0acf-8a4b-4644-9018-f5241621a0ea 00:16:55.625 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:55.883 00:16:55.883 real 0m15.069s 00:16:55.883 user 0m14.621s 00:16:55.883 sys 0m1.381s 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:55.883 ************************************ 00:16:55.883 END TEST lvs_grow_clean 00:16:55.883 ************************************ 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:55.883 ************************************ 00:16:55.883 START TEST lvs_grow_dirty 00:16:55.883 ************************************ 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 
00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:55.883 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:56.142 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:56.142 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:56.142 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:16:56.142 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:16:56.142 19:23:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:56.400 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:56.400 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:56.400 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a lvol 150 00:16:56.659 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=50b410cd-f579-4c22-9670-1031cec865cf 00:16:56.659 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:56.659 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:56.659 [2024-07-15 19:23:07.479859] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:56.659 [2024-07-15 19:23:07.479911] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev 
event: type 1 00:16:56.659 true 00:16:56.659 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:16:56.659 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:56.917 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:56.917 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:57.175 19:23:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 50b410cd-f579-4c22-9670-1031cec865cf 00:16:57.175 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:57.433 [2024-07-15 19:23:08.161867] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.433 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:57.692 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:57.692 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1593670 00:16:57.692 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:57.692 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1593670 /var/tmp/bdevperf.sock 00:16:57.692 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1593670 ']' 00:16:57.692 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.692 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.692 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.692 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.692 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:57.692 [2024-07-15 19:23:08.374548] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:16:57.692 [2024-07-15 19:23:08.374590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593670 ] 00:16:57.692 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.692 [2024-07-15 19:23:08.400567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:57.692 [2024-07-15 19:23:08.428036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.692 [2024-07-15 19:23:08.469166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.951 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.951 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:57.951 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:58.210 Nvme0n1 00:16:58.210 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:58.210 [ 00:16:58.210 { 00:16:58.210 "name": "Nvme0n1", 00:16:58.210 "aliases": [ 00:16:58.210 "50b410cd-f579-4c22-9670-1031cec865cf" 00:16:58.210 ], 00:16:58.210 "product_name": "NVMe disk", 00:16:58.210 "block_size": 4096, 00:16:58.210 "num_blocks": 38912, 00:16:58.210 "uuid": "50b410cd-f579-4c22-9670-1031cec865cf", 00:16:58.210 "assigned_rate_limits": { 00:16:58.210 "rw_ios_per_sec": 0, 00:16:58.210 "rw_mbytes_per_sec": 0, 00:16:58.210 "r_mbytes_per_sec": 0, 00:16:58.210 "w_mbytes_per_sec": 0 00:16:58.210 }, 00:16:58.210 "claimed": false, 00:16:58.210 "zoned": false, 00:16:58.210 "supported_io_types": { 00:16:58.210 "read": true, 00:16:58.210 "write": true, 00:16:58.210 "unmap": true, 00:16:58.210 "flush": true, 00:16:58.210 "reset": true, 00:16:58.210 "nvme_admin": true, 00:16:58.210 "nvme_io": true, 00:16:58.210 "nvme_io_md": false, 00:16:58.210 "write_zeroes": true, 00:16:58.210 "zcopy": false, 00:16:58.210 "get_zone_info": false, 00:16:58.210 "zone_management": false, 00:16:58.210 "zone_append": false, 00:16:58.210 "compare": true, 00:16:58.210 "compare_and_write": true, 00:16:58.210 "abort": true, 00:16:58.210 "seek_hole": false, 00:16:58.210 "seek_data": false, 00:16:58.210 "copy": true, 00:16:58.210 "nvme_iov_md": false 00:16:58.210 }, 00:16:58.210 "memory_domains": [ 00:16:58.210 { 00:16:58.210 "dma_device_id": "system", 00:16:58.210 "dma_device_type": 1 00:16:58.210 } 00:16:58.210 ], 00:16:58.210 "driver_specific": { 00:16:58.210 "nvme": [ 00:16:58.210 { 00:16:58.210 "trid": { 00:16:58.210 "trtype": "TCP", 00:16:58.210 "adrfam": "IPv4", 00:16:58.210 "traddr": "10.0.0.2", 00:16:58.210 "trsvcid": "4420", 00:16:58.210 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:58.210 }, 00:16:58.210 "ctrlr_data": { 00:16:58.210 "cntlid": 1, 00:16:58.210 "vendor_id": "0x8086", 00:16:58.210 "model_number": "SPDK bdev Controller", 00:16:58.210 "serial_number": "SPDK0", 00:16:58.210 "firmware_revision": "24.09", 00:16:58.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:58.210 "oacs": { 00:16:58.210 "security": 0, 
00:16:58.210 "format": 0, 00:16:58.210 "firmware": 0, 00:16:58.210 "ns_manage": 0 00:16:58.210 }, 00:16:58.210 "multi_ctrlr": true, 00:16:58.210 "ana_reporting": false 00:16:58.210 }, 00:16:58.210 "vs": { 00:16:58.210 "nvme_version": "1.3" 00:16:58.210 }, 00:16:58.210 "ns_data": { 00:16:58.210 "id": 1, 00:16:58.210 "can_share": true 00:16:58.210 } 00:16:58.210 } 00:16:58.210 ], 00:16:58.210 "mp_policy": "active_passive" 00:16:58.210 } 00:16:58.210 } 00:16:58.210 ] 00:16:58.210 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1593883 00:16:58.210 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:58.210 19:23:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:58.468 Running I/O for 10 seconds... 00:16:59.405 Latency(us) 00:16:59.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.405 Nvme0n1 : 1.00 22078.00 86.24 0.00 0.00 0.00 0.00 0.00 00:16:59.405 =================================================================================================================== 00:16:59.405 Total : 22078.00 86.24 0.00 0.00 0.00 0.00 0.00 00:16:59.405 00:17:00.340 19:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:00.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.340 Nvme0n1 : 2.00 22227.00 86.82 0.00 0.00 0.00 0.00 0.00 00:17:00.340 =================================================================================================================== 00:17:00.340 Total : 22227.00 86.82 0.00 0.00 0.00 0.00 0.00 00:17:00.340 00:17:00.340 true 00:17:00.599 19:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:00.599 19:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:00.599 19:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:00.599 19:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:00.599 19:23:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1593883 00:17:01.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.535 Nvme0n1 : 3.00 22295.33 87.09 0.00 0.00 0.00 0.00 0.00 00:17:01.535 =================================================================================================================== 00:17:01.535 Total : 22295.33 87.09 0.00 0.00 0.00 0.00 0.00 00:17:01.535 00:17:02.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.469 Nvme0n1 : 4.00 22361.50 87.35 0.00 0.00 0.00 0.00 0.00 00:17:02.469 =================================================================================================================== 00:17:02.469 Total : 22361.50 87.35 0.00 0.00 0.00 0.00 0.00 00:17:02.469 00:17:03.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.406 Nvme0n1 : 5.00 
22414.00 87.55 0.00 0.00 0.00 0.00 0.00 00:17:03.406 =================================================================================================================== 00:17:03.406 Total : 22414.00 87.55 0.00 0.00 0.00 0.00 0.00 00:17:03.406 00:17:04.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.339 Nvme0n1 : 6.00 22449.00 87.69 0.00 0.00 0.00 0.00 0.00 00:17:04.339 =================================================================================================================== 00:17:04.339 Total : 22449.00 87.69 0.00 0.00 0.00 0.00 0.00 00:17:04.339 00:17:05.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.271 Nvme0n1 : 7.00 22477.43 87.80 0.00 0.00 0.00 0.00 0.00 00:17:05.271 =================================================================================================================== 00:17:05.271 Total : 22477.43 87.80 0.00 0.00 0.00 0.00 0.00 00:17:05.271 00:17:06.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.644 Nvme0n1 : 8.00 22504.75 87.91 0.00 0.00 0.00 0.00 0.00 00:17:06.644 =================================================================================================================== 00:17:06.644 Total : 22504.75 87.91 0.00 0.00 0.00 0.00 0.00 00:17:06.644 00:17:07.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.576 Nvme0n1 : 9.00 22522.44 87.98 0.00 0.00 0.00 0.00 0.00 00:17:07.576 =================================================================================================================== 00:17:07.576 Total : 22522.44 87.98 0.00 0.00 0.00 0.00 0.00 00:17:07.576 00:17:08.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.511 Nvme0n1 : 10.00 22537.40 88.04 0.00 0.00 0.00 0.00 0.00 00:17:08.511 =================================================================================================================== 00:17:08.511 Total : 22537.40 88.04 0.00 0.00 0.00 0.00 0.00 00:17:08.511 00:17:08.511 00:17:08.511 Latency(us) 00:17:08.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.511 Nvme0n1 : 10.01 22537.72 88.04 0.00 0.00 5675.32 4274.09 14132.98 00:17:08.511 =================================================================================================================== 00:17:08.511 Total : 22537.72 88.04 0.00 0.00 5675.32 4274.09 14132.98 00:17:08.511 0 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1593670 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1593670 ']' 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1593670 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1593670 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:08.511 19:23:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1593670' 00:17:08.511 killing process with pid 1593670 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1593670 00:17:08.511 Received shutdown signal, test time was about 10.000000 seconds 00:17:08.511 00:17:08.511 Latency(us) 00:17:08.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.511 =================================================================================================================== 00:17:08.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1593670 00:17:08.511 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:08.768 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:09.026 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:09.026 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1590800 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1590800 00:17:09.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1590800 Killed "${NVMF_APP[@]}" "$@" 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1595539 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1595539 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1595539 ']' 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.283 19:23:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:09.283 [2024-07-15 19:23:19.988166] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:17:09.283 [2024-07-15 19:23:19.988212] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.283 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.283 [2024-07-15 19:23:20.018639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:09.283 [2024-07-15 19:23:20.048478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.283 [2024-07-15 19:23:20.088385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.283 [2024-07-15 19:23:20.088425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.283 [2024-07-15 19:23:20.088432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.283 [2024-07-15 19:23:20.088438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.283 [2024-07-15 19:23:20.088443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:09.283 [2024-07-15 19:23:20.088459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:09.541 [2024-07-15 19:23:20.359457] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:09.541 [2024-07-15 19:23:20.359538] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:09.541 [2024-07-15 19:23:20.359561] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 50b410cd-f579-4c22-9670-1031cec865cf 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=50b410cd-f579-4c22-9670-1031cec865cf 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:09.541 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:09.798 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 50b410cd-f579-4c22-9670-1031cec865cf -t 2000 00:17:10.057 [ 00:17:10.057 { 00:17:10.057 "name": "50b410cd-f579-4c22-9670-1031cec865cf", 00:17:10.057 "aliases": [ 00:17:10.057 "lvs/lvol" 00:17:10.057 ], 00:17:10.057 "product_name": "Logical Volume", 00:17:10.057 "block_size": 4096, 00:17:10.057 "num_blocks": 38912, 00:17:10.057 "uuid": "50b410cd-f579-4c22-9670-1031cec865cf", 00:17:10.057 "assigned_rate_limits": { 00:17:10.057 "rw_ios_per_sec": 0, 00:17:10.057 "rw_mbytes_per_sec": 0, 00:17:10.057 "r_mbytes_per_sec": 0, 00:17:10.057 "w_mbytes_per_sec": 0 00:17:10.057 }, 00:17:10.057 "claimed": false, 00:17:10.057 "zoned": false, 00:17:10.057 "supported_io_types": { 00:17:10.057 "read": true, 00:17:10.057 "write": true, 00:17:10.057 "unmap": true, 00:17:10.057 "flush": false, 00:17:10.057 "reset": true, 00:17:10.057 "nvme_admin": false, 00:17:10.057 "nvme_io": false, 00:17:10.057 "nvme_io_md": 
false, 00:17:10.057 "write_zeroes": true, 00:17:10.057 "zcopy": false, 00:17:10.057 "get_zone_info": false, 00:17:10.057 "zone_management": false, 00:17:10.057 "zone_append": false, 00:17:10.057 "compare": false, 00:17:10.057 "compare_and_write": false, 00:17:10.057 "abort": false, 00:17:10.057 "seek_hole": true, 00:17:10.057 "seek_data": true, 00:17:10.057 "copy": false, 00:17:10.057 "nvme_iov_md": false 00:17:10.057 }, 00:17:10.057 "driver_specific": { 00:17:10.057 "lvol": { 00:17:10.057 "lvol_store_uuid": "81b23a17-2338-4b2b-8dad-d6eb6769e38a", 00:17:10.057 "base_bdev": "aio_bdev", 00:17:10.057 "thin_provision": false, 00:17:10.057 "num_allocated_clusters": 38, 00:17:10.057 "snapshot": false, 00:17:10.057 "clone": false, 00:17:10.057 "esnap_clone": false 00:17:10.057 } 00:17:10.057 } 00:17:10.057 } 00:17:10.057 ] 00:17:10.057 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:10.057 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:10.057 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:10.057 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:10.057 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:10.057 19:23:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:10.315 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:10.315 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:10.574 [2024-07-15 19:23:21.216126] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:10.574 request: 00:17:10.574 { 00:17:10.574 "uuid": "81b23a17-2338-4b2b-8dad-d6eb6769e38a", 00:17:10.574 "method": "bdev_lvol_get_lvstores", 00:17:10.574 "req_id": 1 00:17:10.574 } 00:17:10.574 Got JSON-RPC error response 00:17:10.574 response: 00:17:10.574 { 00:17:10.574 "code": -19, 00:17:10.574 "message": "No such device" 00:17:10.574 } 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:10.574 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:10.833 aio_bdev 00:17:10.833 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 50b410cd-f579-4c22-9670-1031cec865cf 00:17:10.833 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=50b410cd-f579-4c22-9670-1031cec865cf 00:17:10.833 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:10.833 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:10.833 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:10.833 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:10.833 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:11.092 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 50b410cd-f579-4c22-9670-1031cec865cf -t 2000 00:17:11.092 [ 00:17:11.092 { 00:17:11.092 "name": "50b410cd-f579-4c22-9670-1031cec865cf", 00:17:11.092 "aliases": [ 00:17:11.092 "lvs/lvol" 00:17:11.092 ], 00:17:11.092 "product_name": "Logical Volume", 00:17:11.092 "block_size": 4096, 00:17:11.092 "num_blocks": 38912, 00:17:11.092 "uuid": "50b410cd-f579-4c22-9670-1031cec865cf", 00:17:11.092 "assigned_rate_limits": { 00:17:11.092 "rw_ios_per_sec": 0, 00:17:11.092 "rw_mbytes_per_sec": 0, 00:17:11.092 "r_mbytes_per_sec": 0, 00:17:11.092 "w_mbytes_per_sec": 0 00:17:11.092 }, 00:17:11.092 "claimed": false, 00:17:11.092 "zoned": false, 00:17:11.092 "supported_io_types": { 
00:17:11.092 "read": true, 00:17:11.092 "write": true, 00:17:11.092 "unmap": true, 00:17:11.092 "flush": false, 00:17:11.092 "reset": true, 00:17:11.092 "nvme_admin": false, 00:17:11.092 "nvme_io": false, 00:17:11.092 "nvme_io_md": false, 00:17:11.092 "write_zeroes": true, 00:17:11.092 "zcopy": false, 00:17:11.092 "get_zone_info": false, 00:17:11.092 "zone_management": false, 00:17:11.092 "zone_append": false, 00:17:11.092 "compare": false, 00:17:11.092 "compare_and_write": false, 00:17:11.092 "abort": false, 00:17:11.092 "seek_hole": true, 00:17:11.092 "seek_data": true, 00:17:11.092 "copy": false, 00:17:11.092 "nvme_iov_md": false 00:17:11.092 }, 00:17:11.092 "driver_specific": { 00:17:11.092 "lvol": { 00:17:11.092 "lvol_store_uuid": "81b23a17-2338-4b2b-8dad-d6eb6769e38a", 00:17:11.092 "base_bdev": "aio_bdev", 00:17:11.092 "thin_provision": false, 00:17:11.092 "num_allocated_clusters": 38, 00:17:11.092 "snapshot": false, 00:17:11.092 "clone": false, 00:17:11.092 "esnap_clone": false 00:17:11.092 } 00:17:11.092 } 00:17:11.092 } 00:17:11.092 ] 00:17:11.092 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:11.092 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:11.092 19:23:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:11.350 19:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:11.350 19:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:11.350 19:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:11.608 19:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:11.608 19:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 50b410cd-f579-4c22-9670-1031cec865cf 00:17:11.608 19:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 81b23a17-2338-4b2b-8dad-d6eb6769e38a 00:17:11.866 19:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:12.124 00:17:12.124 real 0m16.220s 00:17:12.124 user 0m41.910s 00:17:12.124 sys 0m3.848s 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:12.124 ************************************ 00:17:12.124 END TEST lvs_grow_dirty 00:17:12.124 ************************************ 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:12.124 nvmf_trace.0 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.124 rmmod nvme_tcp 00:17:12.124 rmmod nvme_fabrics 00:17:12.124 rmmod nvme_keyring 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1595539 ']' 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1595539 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1595539 ']' 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1595539 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.124 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1595539 00:17:12.382 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:12.382 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:12.382 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1595539' 00:17:12.382 killing process with pid 1595539 00:17:12.382 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1595539 00:17:12.382 19:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1595539 00:17:12.382 19:23:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:12.382 19:23:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:12.382 19:23:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:12.382 
19:23:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.382 19:23:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.382 19:23:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.382 19:23:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.382 19:23:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.919 19:23:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.919 00:17:14.919 real 0m39.337s 00:17:14.919 user 1m1.262s 00:17:14.919 sys 0m9.439s 00:17:14.919 19:23:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:14.919 19:23:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:14.919 ************************************ 00:17:14.919 END TEST nvmf_lvs_grow 00:17:14.919 ************************************ 00:17:14.919 19:23:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:14.919 19:23:25 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:14.919 19:23:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:14.919 19:23:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.919 19:23:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:14.919 ************************************ 00:17:14.919 START TEST nvmf_bdev_io_wait 00:17:14.919 ************************************ 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:14.919 * Looking for test storage... 
00:17:14.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.919 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.920 19:23:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:20.285 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:20.285 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.285 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:20.286 Found net devices under 0000:86:00.0: cvl_0_0 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:20.286 Found net devices under 0000:86:00.1: cvl_0_1 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:20.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:20.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:17:20.286 00:17:20.286 --- 10.0.0.2 ping statistics --- 00:17:20.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.286 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:17:20.286 00:17:20.286 --- 10.0.0.1 ping statistics --- 00:17:20.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.286 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1599547 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1599547 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1599547 ']' 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 [2024-07-15 19:23:30.573568] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:17:20.286 [2024-07-15 19:23:30.573612] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.286 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.286 [2024-07-15 19:23:30.604258] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:20.286 [2024-07-15 19:23:30.632670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.286 [2024-07-15 19:23:30.673832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.286 [2024-07-15 19:23:30.673875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.286 [2024-07-15 19:23:30.673882] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.286 [2024-07-15 19:23:30.673887] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.286 [2024-07-15 19:23:30.673893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.286 [2024-07-15 19:23:30.673989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.286 [2024-07-15 19:23:30.674085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.286 [2024-07-15 19:23:30.674149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.286 [2024-07-15 19:23:30.674150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 [2024-07-15 
19:23:30.827331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 Malloc0 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:20.286 [2024-07-15 19:23:30.887680] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1599576 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1599578 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:20.286 { 00:17:20.286 "params": { 00:17:20.286 "name": "Nvme$subsystem", 00:17:20.286 "trtype": "$TEST_TRANSPORT", 00:17:20.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:20.286 "adrfam": "ipv4", 00:17:20.286 "trsvcid": "$NVMF_PORT", 00:17:20.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:20.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:20.286 "hdgst": ${hdgst:-false}, 00:17:20.286 "ddgst": ${ddgst:-false} 00:17:20.286 }, 00:17:20.286 "method": 
"bdev_nvme_attach_controller" 00:17:20.286 } 00:17:20.286 EOF 00:17:20.286 )") 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1599580 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:20.286 { 00:17:20.286 "params": { 00:17:20.286 "name": "Nvme$subsystem", 00:17:20.286 "trtype": "$TEST_TRANSPORT", 00:17:20.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:20.286 "adrfam": "ipv4", 00:17:20.286 "trsvcid": "$NVMF_PORT", 00:17:20.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:20.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:20.286 "hdgst": ${hdgst:-false}, 00:17:20.286 "ddgst": ${ddgst:-false} 00:17:20.286 }, 00:17:20.286 "method": "bdev_nvme_attach_controller" 00:17:20.286 } 00:17:20.286 EOF 00:17:20.286 )") 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1599583 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:20.286 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:20.287 { 00:17:20.287 "params": { 00:17:20.287 "name": "Nvme$subsystem", 00:17:20.287 "trtype": "$TEST_TRANSPORT", 00:17:20.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:20.287 "adrfam": "ipv4", 00:17:20.287 "trsvcid": "$NVMF_PORT", 00:17:20.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:20.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:20.287 "hdgst": ${hdgst:-false}, 00:17:20.287 "ddgst": ${ddgst:-false} 00:17:20.287 }, 00:17:20.287 "method": "bdev_nvme_attach_controller" 00:17:20.287 } 00:17:20.287 EOF 00:17:20.287 )") 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@554 -- # cat 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:20.287 { 00:17:20.287 "params": { 00:17:20.287 "name": "Nvme$subsystem", 00:17:20.287 "trtype": "$TEST_TRANSPORT", 00:17:20.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:20.287 "adrfam": "ipv4", 00:17:20.287 "trsvcid": "$NVMF_PORT", 00:17:20.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:20.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:20.287 "hdgst": ${hdgst:-false}, 00:17:20.287 "ddgst": ${ddgst:-false} 00:17:20.287 }, 00:17:20.287 "method": "bdev_nvme_attach_controller" 00:17:20.287 } 00:17:20.287 EOF 00:17:20.287 )") 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1599576 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:20.287 "params": { 00:17:20.287 "name": "Nvme1", 00:17:20.287 "trtype": "tcp", 00:17:20.287 "traddr": "10.0.0.2", 00:17:20.287 "adrfam": "ipv4", 00:17:20.287 "trsvcid": "4420", 00:17:20.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.287 "hdgst": false, 00:17:20.287 "ddgst": false 00:17:20.287 }, 00:17:20.287 "method": "bdev_nvme_attach_controller" 00:17:20.287 }' 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:20.287 "params": { 00:17:20.287 "name": "Nvme1", 00:17:20.287 "trtype": "tcp", 00:17:20.287 "traddr": "10.0.0.2", 00:17:20.287 "adrfam": "ipv4", 00:17:20.287 "trsvcid": "4420", 00:17:20.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.287 "hdgst": false, 00:17:20.287 "ddgst": false 00:17:20.287 }, 00:17:20.287 "method": "bdev_nvme_attach_controller" 00:17:20.287 }' 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:20.287 "params": { 00:17:20.287 "name": "Nvme1", 00:17:20.287 "trtype": "tcp", 00:17:20.287 "traddr": "10.0.0.2", 00:17:20.287 "adrfam": "ipv4", 00:17:20.287 "trsvcid": "4420", 00:17:20.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.287 "hdgst": false, 00:17:20.287 "ddgst": false 00:17:20.287 }, 00:17:20.287 "method": "bdev_nvme_attach_controller" 00:17:20.287 }' 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:20.287 19:23:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:20.287 "params": { 00:17:20.287 "name": "Nvme1", 00:17:20.287 "trtype": "tcp", 00:17:20.287 "traddr": "10.0.0.2", 00:17:20.287 "adrfam": "ipv4", 00:17:20.287 "trsvcid": "4420", 00:17:20.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.287 "hdgst": false, 00:17:20.287 "ddgst": false 00:17:20.287 }, 00:17:20.287 "method": "bdev_nvme_attach_controller" 00:17:20.287 }' 00:17:20.287 [2024-07-15 19:23:30.938516] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:17:20.287 [2024-07-15 19:23:30.938553] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:17:20.287 [2024-07-15 19:23:30.938564] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:20.287 [2024-07-15 19:23:30.938592] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:20.287 [2024-07-15 19:23:30.938848] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:17:20.287 [2024-07-15 19:23:30.938883] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:20.287 [2024-07-15 19:23:30.942376] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:17:20.287 [2024-07-15 19:23:30.942427] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:20.287 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.287 [2024-07-15 19:23:31.082332] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:20.287 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.287 [2024-07-15 19:23:31.122148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.546 [2024-07-15 19:23:31.149017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:20.546 [2024-07-15 19:23:31.177666] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:20.546 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.546 [2024-07-15 19:23:31.220335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.546 [2024-07-15 19:23:31.248024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:20.546 [2024-07-15 19:23:31.272658] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:20.546 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.546 [2024-07-15 19:23:31.314138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.546 [2024-07-15 19:23:31.330799] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:20.546 [2024-07-15 19:23:31.344034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:20.546 [2024-07-15 19:23:31.356626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.546 [2024-07-15 19:23:31.383655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:20.805 Running I/O for 1 seconds... 00:17:20.805 Running I/O for 1 seconds... 00:17:20.805 Running I/O for 1 seconds... 00:17:20.805 Running I/O for 1 seconds... 
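For reference, the nvmf_bdev_io_wait setup traced above reduces to a short sequence of SPDK RPCs on the target side plus four bdevperf runs on the initiator side. The sketch below is a hand-written summary of those steps, not captured log output; it assumes rpc_cmd forwards to SPDK's scripts/rpc.py against the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace (listening on 10.0.0.2:4420), and it substitutes a hypothetical bdevperf.json file for the /dev/fd/63 process substitution used by the test script.

  # Target side: storage and subsystem setup (same arguments as in the trace above)
  rpc.py bdev_set_options -p 5 -c 1
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: one bdevperf process per workload (write, read, flush, unmap), each
  # attaching Nvme1 over TCP using the bdev_nvme_attach_controller params printed above, e.g.:
  #   { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
  #     "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
  #     "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false },
  #     "method": "bdev_nvme_attach_controller" }
  bdevperf -m 0x10 -i 1 --json bdevperf.json -q 128 -o 4096 -w write -t 1 -s 256
  bdevperf -m 0x20 -i 2 --json bdevperf.json -q 128 -o 4096 -w read -t 1 -s 256
  bdevperf -m 0x40 -i 3 --json bdevperf.json -q 128 -o 4096 -w flush -t 1 -s 256
  bdevperf -m 0x80 -i 4 --json bdevperf.json -q 128 -o 4096 -w unmap -t 1 -s 256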
00:17:21.741
00:17:21.741 Latency(us)
00:17:21.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:21.741 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:17:21.741 Nvme1n1 : 1.01 8538.25 33.35 0.00 0.00 14877.10 5841.25 23365.01
00:17:21.741 ===================================================================================================================
00:17:21.741 Total : 8538.25 33.35 0.00 0.00 14877.10 5841.25 23365.01
00:17:21.741
00:17:21.741 Latency(us)
00:17:21.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:21.741 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:17:21.741 Nvme1n1 : 1.00 245555.84 959.20 0.00 0.00 519.72 211.92 633.99
00:17:21.741 ===================================================================================================================
00:17:21.741 Total : 245555.84 959.20 0.00 0.00 519.72 211.92 633.99
00:17:21.999
00:17:21.999 Latency(us)
00:17:21.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:21.999 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:17:21.999 Nvme1n1 : 1.01 7839.82 30.62 0.00 0.00 16273.30 6211.67 27468.13
00:17:21.999 ===================================================================================================================
00:17:21.999 Total : 7839.82 30.62 0.00 0.00 16273.30 6211.67 27468.13
00:17:21.999
00:17:21.999 Latency(us)
00:17:21.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:21.999 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:17:21.999 Nvme1n1 : 1.00 11118.31 43.43 0.00 0.00 11481.14 4843.97 22795.13
00:17:21.999 ===================================================================================================================
00:17:21.999 Total : 11118.31 43.43 0.00 0.00 11481.14 4843.97 22795.13
00:17:21.999 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1599578 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1599580 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1599583 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.258 19:23:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.258 rmmod nvme_tcp 00:17:22.258 rmmod nvme_fabrics 00:17:22.258 rmmod nvme_keyring 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1599547 ']' 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1599547 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1599547 ']' 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1599547 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1599547 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1599547' 00:17:22.258 killing process with pid 1599547 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1599547 00:17:22.258 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1599547 00:17:22.516 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:22.516 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:22.516 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:22.516 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.516 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.516 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.516 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.516 19:23:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.051 19:23:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.051 00:17:25.051 real 0m9.985s 00:17:25.051 user 0m16.656s 00:17:25.051 sys 0m5.484s 00:17:25.051 19:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.051 19:23:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:25.051 ************************************ 00:17:25.051 END TEST nvmf_bdev_io_wait 00:17:25.051 ************************************ 00:17:25.051 19:23:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.051 19:23:35 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:25.051 19:23:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.051 19:23:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.051 19:23:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.051 ************************************ 00:17:25.051 START TEST nvmf_queue_depth 00:17:25.051 ************************************ 
00:17:25.051 19:23:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:25.051 * Looking for test storage... 00:17:25.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.052 19:23:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.324 
19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:30.324 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:30.324 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:30.324 Found net devices under 0000:86:00.0: cvl_0_0 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:30.324 Found net devices under 0000:86:00.1: cvl_0_1 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:30.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:17:30.324 00:17:30.324 --- 10.0.0.2 ping statistics --- 00:17:30.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.324 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:17:30.324 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:17:30.325 00:17:30.325 --- 10.0.0.1 ping statistics --- 00:17:30.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.325 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1603408 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1603408 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1603408 ']' 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.325 19:23:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.325 [2024-07-15 19:23:40.982388] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:17:30.325 [2024-07-15 19:23:40.982432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.325 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.325 [2024-07-15 19:23:41.013000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:30.325 [2024-07-15 19:23:41.041191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.325 [2024-07-15 19:23:41.081838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.325 [2024-07-15 19:23:41.081876] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.325 [2024-07-15 19:23:41.081884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.325 [2024-07-15 19:23:41.081890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.325 [2024-07-15 19:23:41.081895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.325 [2024-07-15 19:23:41.081917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.325 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.325 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:30.325 19:23:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:30.325 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:30.325 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.584 [2024-07-15 19:23:41.210520] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.584 Malloc0 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.584 [2024-07-15 19:23:41.269856] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1603587 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1603587 /var/tmp/bdevperf.sock 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1603587 ']' 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.584 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.584 [2024-07-15 19:23:41.317763] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:17:30.584 [2024-07-15 19:23:41.317804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603587 ] 00:17:30.584 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.584 [2024-07-15 19:23:41.343680] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:30.584 [2024-07-15 19:23:41.371452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.584 [2024-07-15 19:23:41.410987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.843 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.843 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:30.843 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:30.843 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.843 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.843 NVMe0n1 00:17:30.843 19:23:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.843 19:23:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:31.101 Running I/O for 10 seconds... 00:17:41.080 00:17:41.080 Latency(us) 00:17:41.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.080 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:41.080 Verification LBA range: start 0x0 length 0x4000 00:17:41.080 NVMe0n1 : 10.07 12270.24 47.93 0.00 0.00 83146.90 19261.89 58355.53 00:17:41.080 =================================================================================================================== 00:17:41.080 Total : 12270.24 47.93 0.00 0.00 83146.90 19261.89 58355.53 00:17:41.080 0 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1603587 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1603587 ']' 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1603587 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1603587 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1603587' 00:17:41.080 killing process with pid 1603587 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1603587 00:17:41.080 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.080 00:17:41.080 Latency(us) 00:17:41.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.080 =================================================================================================================== 00:17:41.080 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.080 19:23:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1603587 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
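For reference, the queue-depth run traced above boils down to the RPC and bdevperf sequence below. This is a condensed sketch, not part of the log: it assumes a local SPDK checkout and reuses the socket path, NQN, and -q/-o/-w/-t values that appear in the trace (rpc.py stands in for the test scripts' rpc_cmd wrapper).

# target side: TCP transport, a 64 MB malloc bdev (512-byte blocks), one subsystem listening on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf in passive (-z) mode, 1024 outstanding 4096-byte verify I/Os for 10 seconds
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests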
00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.339 rmmod nvme_tcp 00:17:41.339 rmmod nvme_fabrics 00:17:41.339 rmmod nvme_keyring 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1603408 ']' 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1603408 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1603408 ']' 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1603408 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1603408 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1603408' 00:17:41.339 killing process with pid 1603408 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1603408 00:17:41.339 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1603408 00:17:41.598 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:41.598 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:41.598 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:41.598 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.598 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.598 19:23:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.598 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.598 19:23:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.147 19:23:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.147 00:17:44.147 real 0m19.052s 00:17:44.147 user 0m22.796s 00:17:44.147 sys 0m5.511s 00:17:44.147 19:23:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:44.147 19:23:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:44.147 ************************************ 00:17:44.147 END TEST nvmf_queue_depth 00:17:44.147 ************************************ 
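Before the next test starts, note the network plumbing that nvmftestinit/nvmf_tcp_init performs for every TCP test in this run: the second NIC port (cvl_0_1) stays in the default namespace as the initiator interface, while the first port (cvl_0_0) is moved into a private namespace and given the target address, with an iptables rule opening the NVMe/TCP port. A condensed sketch, using the interface names and addresses taken from this trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP (port 4420) traffic arriving on the initiator interface
ping -c 1 10.0.0.2                                                   # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> initiator check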
00:17:44.147 19:23:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:44.147 19:23:54 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:44.147 19:23:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:44.147 19:23:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.147 19:23:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:44.147 ************************************ 00:17:44.147 START TEST nvmf_target_multipath 00:17:44.147 ************************************ 00:17:44.147 19:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:44.147 * Looking for test storage... 00:17:44.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.147 19:23:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.147 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:44.147 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.147 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.148 19:23:54 
nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.148 19:23:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.488 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:49.489 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:49.489 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.489 19:23:59 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:49.489 Found net devices under 0000:86:00.0: cvl_0_0 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:49.489 Found net devices under 0000:86:00.1: cvl_0_1 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 
-- # ip -4 addr flush cvl_0_0 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:49.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:17:49.489 00:17:49.489 --- 10.0.0.2 ping statistics --- 00:17:49.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.489 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:17:49.489 00:17:49.489 --- 10.0.0.1 ping statistics --- 00:17:49.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.489 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:49.489 only one NIC for nvmf test 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 
-- # '[' tcp == tcp ']' 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:49.489 rmmod nvme_tcp 00:17:49.489 rmmod nvme_fabrics 00:17:49.489 rmmod nvme_keyring 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.489 19:23:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.395 19:24:01 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:51.395 00:17:51.395 real 0m7.370s 00:17:51.395 user 0m1.407s 00:17:51.395 sys 0m3.939s 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.395 19:24:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:51.395 ************************************ 00:17:51.395 END TEST nvmf_target_multipath 00:17:51.395 ************************************ 00:17:51.395 19:24:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:51.395 19:24:01 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:51.395 19:24:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:51.395 19:24:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.395 19:24:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.395 ************************************ 00:17:51.395 START TEST nvmf_zcopy 00:17:51.395 ************************************ 00:17:51.395 19:24:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:51.395 * Looking for test storage... 00:17:51.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.395 19:24:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.395 19:24:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:51.395 19:24:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.395 19:24:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.395 19:24:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.395 19:24:02 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.395 
19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:51.395 19:24:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:56.666 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:56.666 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:56.666 Found net devices under 0000:86:00.0: cvl_0_0 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.666 
19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:56.666 Found net devices under 0000:86:00.1: cvl_0_1 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.666 19:24:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:56.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:56.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:17:56.666 00:17:56.666 --- 10.0.0.2 ping statistics --- 00:17:56.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.666 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:17:56.666 00:17:56.666 --- 10.0.0.1 ping statistics --- 00:17:56.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.666 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:56.666 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1612128 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1612128 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1612128 ']' 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 [2024-07-15 19:24:07.201874] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:17:56.667 [2024-07-15 19:24:07.201917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.667 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.667 [2024-07-15 19:24:07.230753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:56.667 [2024-07-15 19:24:07.259912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.667 [2024-07-15 19:24:07.299465] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.667 [2024-07-15 19:24:07.299503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.667 [2024-07-15 19:24:07.299510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.667 [2024-07-15 19:24:07.299517] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.667 [2024-07-15 19:24:07.299522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.667 [2024-07-15 19:24:07.299541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 [2024-07-15 19:24:07.426739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 [2024-07-15 19:24:07.446904] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.667 19:24:07 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 malloc0 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:56.667 { 00:17:56.667 "params": { 00:17:56.667 "name": "Nvme$subsystem", 00:17:56.667 "trtype": "$TEST_TRANSPORT", 00:17:56.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:56.667 "adrfam": "ipv4", 00:17:56.667 "trsvcid": "$NVMF_PORT", 00:17:56.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:56.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:56.667 "hdgst": ${hdgst:-false}, 00:17:56.667 "ddgst": ${ddgst:-false} 00:17:56.667 }, 00:17:56.667 "method": "bdev_nvme_attach_controller" 00:17:56.667 } 00:17:56.667 EOF 00:17:56.667 )") 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:56.667 19:24:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:56.667 "params": { 00:17:56.667 "name": "Nvme1", 00:17:56.667 "trtype": "tcp", 00:17:56.667 "traddr": "10.0.0.2", 00:17:56.667 "adrfam": "ipv4", 00:17:56.667 "trsvcid": "4420", 00:17:56.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.667 "hdgst": false, 00:17:56.667 "ddgst": false 00:17:56.667 }, 00:17:56.667 "method": "bdev_nvme_attach_controller" 00:17:56.667 }' 00:17:56.667 [2024-07-15 19:24:07.511508] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
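The zcopy.sh steps traced above configure the target through SPDK JSON-RPC calls issued by the test's rpc_cmd helper. A minimal sketch of the same sequence, re-expressed with scripts/rpc.py, is shown below; the use of rpc.py and the /var/tmp/spdk.sock default socket are assumptions for illustration, while the method names and arguments are the ones visible in the trace.

    # Target-side configuration sketch (nvmf_tgt already running in the namespace).
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

    # TCP transport with zero-copy enabled and in-capsule data size 0, as in the trace.
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem allowing any host, with the serial and namespace limit from the trace.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Malloc bdev backing the namespace, then exposed as NSID 1.
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1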
00:17:56.667 [2024-07-15 19:24:07.511548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612155 ] 00:17:56.926 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.926 [2024-07-15 19:24:07.537880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:56.926 [2024-07-15 19:24:07.565382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.926 [2024-07-15 19:24:07.605431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.184 Running I/O for 10 seconds... 00:18:07.162 00:18:07.162 Latency(us) 00:18:07.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.162 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:07.162 Verification LBA range: start 0x0 length 0x1000 00:18:07.162 Nvme1n1 : 10.01 8642.62 67.52 0.00 0.00 14767.59 2122.80 27354.16 00:18:07.162 =================================================================================================================== 00:18:07.162 Total : 8642.62 67.52 0.00 0.00 14767.59 2122.80 27354.16 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1614365 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:07.162 { 00:18:07.162 "params": { 00:18:07.162 "name": "Nvme$subsystem", 00:18:07.162 "trtype": "$TEST_TRANSPORT", 00:18:07.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:07.162 "adrfam": "ipv4", 00:18:07.162 "trsvcid": "$NVMF_PORT", 00:18:07.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:07.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:07.162 "hdgst": ${hdgst:-false}, 00:18:07.162 "ddgst": ${ddgst:-false} 00:18:07.162 }, 00:18:07.162 "method": "bdev_nvme_attach_controller" 00:18:07.162 } 00:18:07.162 EOF 00:18:07.162 )") 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:07.162 [2024-07-15 19:24:18.010159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.162 [2024-07-15 19:24:18.010190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
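On the initiator side, gen_nvmf_target_json emits the bdev_nvme_attach_controller configuration printed in the trace and bdevperf reads it over a file descriptor (--json /dev/fd/62, then /dev/fd/63 for the second pass). The sketch below writes the same configuration to a temporary file instead, which is a simplification rather than what the test does; the surrounding "subsystems"/"bdev" wrapper is the usual SPDK JSON config layout and is assumed here, only the inner method/params object appears verbatim in the trace.

    # Initiator-side sketch: attach to the target and run the two bdevperf passes.
    cat > /tmp/zcopy_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # First pass in the trace: 10-second verify workload, queue depth 128, 8 KiB I/O.
    ./build/examples/bdevperf --json /tmp/zcopy_bdev.json -t 10 -q 128 -w verify -o 8192

    # Second pass: 5-second 50/50 random read/write with the same connection settings.
    ./build/examples/bdevperf --json /tmp/zcopy_bdev.json -t 5 -q 128 -w randrw -M 50 -o 8192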
00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:07.162 19:24:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:07.162 "params": { 00:18:07.162 "name": "Nvme1", 00:18:07.162 "trtype": "tcp", 00:18:07.162 "traddr": "10.0.0.2", 00:18:07.162 "adrfam": "ipv4", 00:18:07.162 "trsvcid": "4420", 00:18:07.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.162 "hdgst": false, 00:18:07.162 "ddgst": false 00:18:07.162 }, 00:18:07.162 "method": "bdev_nvme_attach_controller" 00:18:07.162 }' 00:18:07.420 [2024-07-15 19:24:18.022160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.420 [2024-07-15 19:24:18.022175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.420 [2024-07-15 19:24:18.030178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.420 [2024-07-15 19:24:18.030188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.032479] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:18:07.421 [2024-07-15 19:24:18.032521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614365 ] 00:18:07.421 [2024-07-15 19:24:18.042211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.042222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.054249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.054259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.421 [2024-07-15 19:24:18.058537] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:07.421 [2024-07-15 19:24:18.066280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.066295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.078307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.078317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.082321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.421 [2024-07-15 19:24:18.090344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.090355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.102377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.102401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.114407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.114424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.122610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.421 [2024-07-15 19:24:18.126437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.126448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.138476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.138495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.150512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.150527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.162541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.162552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.174568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.174579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.186603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.186614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.198642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.198657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.210683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.210700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.222705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.222718] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.234747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.234765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.246770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.246782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.258800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.258810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.421 [2024-07-15 19:24:18.270834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.421 [2024-07-15 19:24:18.270843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.282876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.282898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.294902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.294912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.306937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.306948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.318969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.318979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.331005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.331018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.343034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.343044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.355067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.355076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.367103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.367115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.379146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.379163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 Running I/O for 5 seconds... 
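Each repeated pair of messages that follows corresponds to one failed nvmf_subsystem_add_ns RPC issued while the second bdevperf instance starts up and runs its 5-second randrw workload: subsystem.c rejects the request because NSID 1 is already occupied by malloc0, and nvmf_rpc.c then reports that the namespace could not be added. The pattern is roughly the loop sketched below; this is an illustration of what produces these log lines, not the literal contents of zcopy.sh, and the variable name is hypothetical.

    # Illustrative only: keep issuing add_ns requests against the busy subsystem
    # while the background bdevperf process (perfpid=1614365 in the trace) is alive.
    while kill -0 "$perfpid" 2>/dev/null; do
        # Each call fails with "Requested NSID 1 already in use" followed by
        # "Unable to add namespace", which is the negative path being exercised here.
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done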
00:18:07.679 [2024-07-15 19:24:18.391169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.391181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.403033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.403053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.417070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.417089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.424762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.424781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.438115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.438133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.446970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.446989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.455991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.456009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.465211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.465239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.474500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.474519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.488660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.488679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.501987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.502013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.515662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.515682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.679 [2024-07-15 19:24:18.524407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.679 [2024-07-15 19:24:18.524425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.538681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.538701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.552596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 
[2024-07-15 19:24:18.552617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.566942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.566963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.577608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.577627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.586779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.586797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.595599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.595617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.610053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.610071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.622984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.623002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.636928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.636946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.645817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.645835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.660368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.660387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.673987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.674006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.688678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.688696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.699586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.699605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.708975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.708994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.717564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.717583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.726933] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.726951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.735580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.735598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.750412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.750430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.765729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.765747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.774708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.774726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.938 [2024-07-15 19:24:18.789310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.938 [2024-07-15 19:24:18.789329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.196 [2024-07-15 19:24:18.803465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.196 [2024-07-15 19:24:18.803485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.196 [2024-07-15 19:24:18.814055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.196 [2024-07-15 19:24:18.814073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.196 [2024-07-15 19:24:18.822606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.196 [2024-07-15 19:24:18.822624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.196 [2024-07-15 19:24:18.837339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.196 [2024-07-15 19:24:18.837358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.196 [2024-07-15 19:24:18.853105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.196 [2024-07-15 19:24:18.853124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.196 [2024-07-15 19:24:18.867326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.196 [2024-07-15 19:24:18.867345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.196 [2024-07-15 19:24:18.880872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.196 [2024-07-15 19:24:18.880891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:18.889915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:18.889933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:18.899259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:18.899279] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:18.914105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:18.914125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:18.925063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:18.925084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:18.939267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:18.939287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:18.948077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:18.948096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:18.962481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:18.962500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:18.975792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:18.975812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:18.989577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:18.989597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:19.003637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:19.003656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:19.012742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:19.012761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:19.027178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:19.027198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.197 [2024-07-15 19:24:19.041195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.197 [2024-07-15 19:24:19.041215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.051768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.051789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.060928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.060947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.069454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.069472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.083802] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.083821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.098086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.098104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.105759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.105782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.119784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.119803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.128895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.128913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.137763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.137782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.152714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.152733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.168299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.168319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.177308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.177327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.192017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.192036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.207907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.207926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.221816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.221835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.230925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.230943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.239647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.239664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.254385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.254405] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.262042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.262061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.275916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.275935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.284778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.284797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.455 [2024-07-15 19:24:19.299820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.455 [2024-07-15 19:24:19.299839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.311404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.311425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.325528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.325548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.339691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.339710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.350961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.350979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.360061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.360080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.374789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.374807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.385423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.385452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.400154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.400173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.407805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.407827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.421873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.421892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.713 [2024-07-15 19:24:19.432513] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.713 [2024-07-15 19:24:19.432531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.714 [2024-07-15 19:24:19.441076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.714 [2024-07-15 19:24:19.441094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.714 [2024-07-15 19:24:19.455782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.714 [2024-07-15 19:24:19.455799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.714 [2024-07-15 19:24:19.471567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.714 [2024-07-15 19:24:19.471585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.714 [2024-07-15 19:24:19.485751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.714 [2024-07-15 19:24:19.485769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.714 [2024-07-15 19:24:19.500221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.714 [2024-07-15 19:24:19.500244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.714 [2024-07-15 19:24:19.515541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.714 [2024-07-15 19:24:19.515559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.714 [2024-07-15 19:24:19.530133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.714 [2024-07-15 19:24:19.530152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.714 [2024-07-15 19:24:19.544152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.714 [2024-07-15 19:24:19.544170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.714 [2024-07-15 19:24:19.555033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.714 [2024-07-15 19:24:19.555051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.569691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.569712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.580297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.580318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.594950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.594969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.605877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.605896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.614490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.614509] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.623613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.623631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.632213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.632237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.646646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.646669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.660537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.660555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.675027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.675046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.690508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.690526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.704689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.704708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.715302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.715323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.724111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.724130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.733208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.733235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.742605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.742624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.751566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.751585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.765928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.765947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.779715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.779734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.788513] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.788531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.797560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.797578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.806344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.806363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.971 [2024-07-15 19:24:19.821211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.971 [2024-07-15 19:24:19.821235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.832164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.832184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.840724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.840742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.850016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.850034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.858740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.858762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.873799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.873818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.888952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.888970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.903261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.903280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.917473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.917491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.931170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.931188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.945080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.945098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.954033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.954051] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.962435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.962453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.977212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.977235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:19.992421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:19.992451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:20.006739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:20.006758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:20.020723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:20.020742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:20.034882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:20.034901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:20.043853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:20.043871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:20.052558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:20.052577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:20.067099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:20.067118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.229 [2024-07-15 19:24:20.075922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.229 [2024-07-15 19:24:20.075941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.487 [2024-07-15 19:24:20.084857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.487 [2024-07-15 19:24:20.084877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.487 [2024-07-15 19:24:20.099390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.487 [2024-07-15 19:24:20.099413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.487 [2024-07-15 19:24:20.108362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.487 [2024-07-15 19:24:20.108380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.487 [2024-07-15 19:24:20.123339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.487 [2024-07-15 19:24:20.123357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.487 [2024-07-15 19:24:20.138622] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.487 [2024-07-15 19:24:20.138640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.487 [2024-07-15 19:24:20.153005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.487 [2024-07-15 19:24:20.153024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.487 [2024-07-15 19:24:20.163782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.163800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.178166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.178184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.192676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.192694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.203556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.203574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.212291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.212309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.221519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.221536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.230137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.230156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.244648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.244667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.258580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.258599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.267497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.267516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.276150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.276168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.285293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.285312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.299539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.299559] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.313323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.313344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.322204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.322238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.331398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.331418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.488 [2024-07-15 19:24:20.341177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.488 [2024-07-15 19:24:20.341197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.355732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.355752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.369392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.369412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.378388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.378408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.392891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.392910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.402102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.402121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.416520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.416539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.430246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.430265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.439075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.439094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.448790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.448809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.457909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.457926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.472895] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.472913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.487926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.487945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.502353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.502383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.513368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.513387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.522540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.522560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.537041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.537060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.551285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.551305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.561875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.561894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.571132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.571151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.585824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.585843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.746 [2024-07-15 19:24:20.596781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.746 [2024-07-15 19:24:20.596801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.605631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.605651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.620603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.620622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.637067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.637086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.652838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.652858] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.666987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.667006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.680925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.680945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.689837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.689857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.704405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.704433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.718096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.718115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.732585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.732604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.743250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.743268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.752360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.752378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.760855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.760873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.005 [2024-07-15 19:24:20.775608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.005 [2024-07-15 19:24:20.775627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.006 [2024-07-15 19:24:20.790825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.006 [2024-07-15 19:24:20.790844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.006 [2024-07-15 19:24:20.804856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.006 [2024-07-15 19:24:20.804875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.006 [2024-07-15 19:24:20.818915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.006 [2024-07-15 19:24:20.818935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.006 [2024-07-15 19:24:20.829614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.006 [2024-07-15 19:24:20.829633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.006 [2024-07-15 19:24:20.844353] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.006 [2024-07-15 19:24:20.844373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.006 [2024-07-15 19:24:20.854986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.006 [2024-07-15 19:24:20.855005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.869681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.869701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.880130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.880148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.889264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.889282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.898632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.898650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.913595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.913613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.928838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.928857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.937663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.937680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.952432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.952452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.963526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.963546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.977943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.977962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:20.991894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:20.991912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:21.003157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:21.003175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:21.017662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:21.017681] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:21.030879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:21.030898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:21.045017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.264 [2024-07-15 19:24:21.045035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.264 [2024-07-15 19:24:21.053855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.265 [2024-07-15 19:24:21.053872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.265 [2024-07-15 19:24:21.068331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.265 [2024-07-15 19:24:21.068350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.265 [2024-07-15 19:24:21.078550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.265 [2024-07-15 19:24:21.078569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.265 [2024-07-15 19:24:21.087684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.265 [2024-07-15 19:24:21.087702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.265 [2024-07-15 19:24:21.102343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.265 [2024-07-15 19:24:21.102363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.265 [2024-07-15 19:24:21.113727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.265 [2024-07-15 19:24:21.113746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.122685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.122704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.132078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.132096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.140652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.140670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.155569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.155588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.171088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.171107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.179782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.179800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.189080] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.189098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.204260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.204279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.219364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.219382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.228463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.228481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.237027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.237049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.251241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.251275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.523 [2024-07-15 19:24:21.264810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.523 [2024-07-15 19:24:21.264828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.524 [2024-07-15 19:24:21.278740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.524 [2024-07-15 19:24:21.278759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.524 [2024-07-15 19:24:21.289420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.524 [2024-07-15 19:24:21.289438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.524 [2024-07-15 19:24:21.298559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.524 [2024-07-15 19:24:21.298577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.524 [2024-07-15 19:24:21.307213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.524 [2024-07-15 19:24:21.307236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.524 [2024-07-15 19:24:21.321695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.524 [2024-07-15 19:24:21.321714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.524 [2024-07-15 19:24:21.335024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.524 [2024-07-15 19:24:21.335044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.524 [2024-07-15 19:24:21.348834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.524 [2024-07-15 19:24:21.348852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.524 [2024-07-15 19:24:21.362503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.524 [2024-07-15 19:24:21.362524] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.524 [2024-07-15 19:24:21.371579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.524 [2024-07-15 19:24:21.371597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.386049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.386068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.400094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.400113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.414070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.414089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.422988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.423006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.431510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.431529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.446141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.446159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.457404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.457422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.466384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.466407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.475628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.475646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.484931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.484949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.494144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.494162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.508854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.508872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.524256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.524274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.533237] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.533255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.541815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.541833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.550992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.551010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.565678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.565698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.573163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.573182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.581944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.581962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.590500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.590518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.605040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.605059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.618362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.618383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.783 [2024-07-15 19:24:21.632526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.783 [2024-07-15 19:24:21.632545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.646200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.646219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.660120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.660139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.673916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.673934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.684552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.684575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.698750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.698768] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.707675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.707695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.721915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.721933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.730882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.730902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.745243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.745264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.758857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.758877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.772816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.772835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.781973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.781992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.796088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.796108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.809562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.809582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.823606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.823625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.832632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.832651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.841531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.841555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.850230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.850248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.865206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.865232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.875515] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.875534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.042 [2024-07-15 19:24:21.889913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.042 [2024-07-15 19:24:21.889931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:21.901480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:21.901499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:21.910605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:21.910631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:21.925233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:21.925253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:21.932826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:21.932846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:21.946552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:21.946570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:21.955389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:21.955408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:21.964645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:21.964664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:21.979039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:21.979058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:21.993158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:21.993176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.004984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.005003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.018819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.018838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.032620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.032639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.046680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.046699] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.055762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.055781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.064532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.064551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.074202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.074221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.088934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.088952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.104636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.104655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.118699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.118718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.127474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.127492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.141334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.141356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.302 [2024-07-15 19:24:22.155021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.302 [2024-07-15 19:24:22.155040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.169503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.169520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.185275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.185292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.194283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.194301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.208832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.208850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.222086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.222104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.236114] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.236131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.245028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.245045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.253754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.253773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.268388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.268407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.277321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.277339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.291770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.291789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.305381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.305400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.319244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.319263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.333189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.333208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.342103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.342121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.356398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.356416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.365623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.365640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.379848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.379866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.393771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.393790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.562 [2024-07-15 19:24:22.408202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.562 [2024-07-15 19:24:22.408221] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.419176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.419197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.427973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.427992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.436755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.436773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.445780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.445799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.454564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.454582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.469177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.469195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.480295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.480313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.488611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.488630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.503059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.503078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.517110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.517128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.528370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.528388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.542241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.542276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.551084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.551103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.559982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.560000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.574798] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.574816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.589694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.589713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.603978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.603997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.611786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.611804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.625520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.625540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.634260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.634279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.649338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.649357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.660252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.660271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:11.821 [2024-07-15 19:24:22.674970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:11.821 [2024-07-15 19:24:22.674989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.682799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.682819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.696250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.696269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.710160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.710179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.723788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.723807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.738272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.738290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.754501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.754519] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.765190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.765209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.779704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.779722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.790348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.790367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.799105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.799124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.807761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.807779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.816182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.816200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.830919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.830938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.842240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.842259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.856535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.856554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.869861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.869880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.878632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.878651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.893308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.893327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.907177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.907196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.915871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.915889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.924554] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.924572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.080 [2024-07-15 19:24:22.933061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.080 [2024-07-15 19:24:22.933079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:22.948178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:22.948197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:22.963187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:22.963206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:22.977756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:22.977775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:22.988058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:22.988075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:22.997093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:22.997112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.011711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.011730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.020532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.020550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.034978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.034996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.043639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.043661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.052097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.052115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.066602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.066621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.079379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.079398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.088286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.088304] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.096864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.096883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.105749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.105768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.120035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.120054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.133812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.133832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.148582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.148601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.158691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.158711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.167586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.167606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.339 [2024-07-15 19:24:23.182105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.339 [2024-07-15 19:24:23.182124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.195387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.195407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.204613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.204631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.219522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.219540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.234903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.234922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.249557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.249576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.265275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.265294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.279927] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.279952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.290276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.290295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.299658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.299677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.314474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.314493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.325318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.325337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.334059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.334077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.342812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.342832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.357422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.357442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.368081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.368100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.382468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.382487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.396546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.396567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.406421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.406442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 00:18:12.597 Latency(us) 00:18:12.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.597 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:12.597 Nvme1n1 : 5.01 16660.40 130.16 0.00 0.00 7675.31 2379.24 17324.30 00:18:12.597 =================================================================================================================== 00:18:12.597 Total : 16660.40 130.16 0.00 0.00 7675.31 2379.24 17324.30 00:18:12.597 [2024-07-15 19:24:23.416705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.416724] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.436781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.436810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.597 [2024-07-15 19:24:23.444777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.597 [2024-07-15 19:24:23.444790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.456814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.456829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.468845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.468868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.480873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.480888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.492904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.492918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.504939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.504956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.516968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.516980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.528998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.529009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.541028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.541039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.553060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.553070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.565097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.565108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.577125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.577135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 [2024-07-15 19:24:23.589158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:12.855 [2024-07-15 19:24:23.589167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.855 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1614365) - No such process 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1614365 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.855 delay0 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.855 19:24:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:12.855 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.855 [2024-07-15 19:24:23.704170] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:19.491 Initializing NVMe Controllers 00:18:19.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:19.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:19.491 Initialization complete. Launching workers. 
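The delay swap and abort run logged above reduce to three RPCs plus the bundled abort example; its per-namespace results follow below. A minimal manual sketch, assuming rpc.py is invoked from the SPDK checkout and the target layout used in this run (nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420), is:

    # replace the original namespace with a deliberately slow delay bdev
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # read/write latencies in microseconds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # drive queued I/O from the initiator and abort it mid-flight
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'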
00:18:19.491 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1262 00:18:19.491 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1545, failed to submit 37 00:18:19.491 success 1371, unsuccess 174, failed 0 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.491 rmmod nvme_tcp 00:18:19.491 rmmod nvme_fabrics 00:18:19.491 rmmod nvme_keyring 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1612128 ']' 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1612128 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1612128 ']' 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1612128 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1612128 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1612128' 00:18:19.491 killing process with pid 1612128 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1612128 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1612128 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.491 19:24:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.031 19:24:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:22.031 00:18:22.031 real 0m30.480s 00:18:22.031 user 0m41.995s 00:18:22.031 sys 0m10.169s 00:18:22.031 19:24:32 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:18:22.031 19:24:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:22.031 ************************************ 00:18:22.031 END TEST nvmf_zcopy 00:18:22.031 ************************************ 00:18:22.031 19:24:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:22.031 19:24:32 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:22.031 19:24:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:22.031 19:24:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.031 19:24:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:22.031 ************************************ 00:18:22.031 START TEST nvmf_nmic 00:18:22.031 ************************************ 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:22.031 * Looking for test storage... 00:18:22.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.031 19:24:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.301 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:27.302 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:27.302 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:27.302 Found net devices under 0000:86:00.0: cvl_0_0 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:27.302 Found net devices under 0000:86:00.1: cvl_0_1 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.302 19:24:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.302 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.302 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.302 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:27.302 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.302 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.302 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.302 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:27.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:18:27.302 00:18:27.302 --- 10.0.0.2 ping statistics --- 00:18:27.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.302 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:18:27.302 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:27.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:18:27.561 00:18:27.561 --- 10.0.0.1 ping statistics --- 00:18:27.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.561 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1619718 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1619718 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1619718 ']' 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.561 19:24:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:27.561 [2024-07-15 19:24:38.241918] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:18:27.561 [2024-07-15 19:24:38.241959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.561 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.561 [2024-07-15 19:24:38.274978] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:27.561 [2024-07-15 19:24:38.303054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.561 [2024-07-15 19:24:38.345821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
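The nvmf_tgt instance starting here runs inside the cvl_0_0_ns_spdk namespace; the topology it relies on, confirmed by the two pings just above, is built from the pair of cvl_0_* ports found earlier. A condensed sketch of that setup, using the same interface names and addresses as this run, is roughly:

    ip netns add cvl_0_0_ns_spdk                                       # namespace that hosts the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept NVMe/TCP traffic on port 4420
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator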
00:18:27.561 [2024-07-15 19:24:38.345858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.561 [2024-07-15 19:24:38.345865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.561 [2024-07-15 19:24:38.345872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.561 [2024-07-15 19:24:38.345876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.561 [2024-07-15 19:24:38.345926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.561 [2024-07-15 19:24:38.345945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.561 [2024-07-15 19:24:38.345970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.561 [2024-07-15 19:24:38.345971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.497 [2024-07-15 19:24:39.092140] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.497 Malloc0 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.497 [2024-07-15 19:24:39.144013] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:28.497 test case1: single bdev can't be used in multiple subsystems 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.497 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.497 [2024-07-15 19:24:39.167906] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:28.497 [2024-07-15 19:24:39.167925] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:28.497 [2024-07-15 19:24:39.167932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.497 request: 00:18:28.497 { 00:18:28.497 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:28.497 "namespace": { 00:18:28.497 "bdev_name": "Malloc0", 00:18:28.497 "no_auto_visible": false 00:18:28.497 }, 00:18:28.497 "method": "nvmf_subsystem_add_ns", 00:18:28.497 "req_id": 1 00:18:28.497 } 00:18:28.497 Got JSON-RPC error response 00:18:28.497 response: 00:18:28.497 { 00:18:28.497 "code": -32602, 00:18:28.497 "message": "Invalid parameters" 00:18:28.497 } 00:18:28.498 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:28.498 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:28.498 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:28.498 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:28.498 Adding namespace failed - expected result. 
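Test case1 above exercises the exclusive_write claim: once Malloc0 backs a namespace in cnode1, no other subsystem can open the same bdev, which is exactly the JSON-RPC error captured in the log. A minimal reproduction with the plain RPC client, assuming the same bdev and subsystem names as this run, is:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # first claim succeeds
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0     # fails: bdev Malloc0 already claimed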
00:18:28.498 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:28.498 test case2: host connect to nvmf target in multiple paths 00:18:28.498 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:28.498 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.498 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:28.498 [2024-07-15 19:24:39.180037] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:28.498 19:24:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.498 19:24:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:29.876 19:24:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:30.812 19:24:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:30.812 19:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:30.812 19:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.812 19:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:30.812 19:24:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:32.713 19:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:32.713 19:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:32.713 19:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:32.713 19:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:32.713 19:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.713 19:24:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:32.714 19:24:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:32.714 [global] 00:18:32.714 thread=1 00:18:32.714 invalidate=1 00:18:32.714 rw=write 00:18:32.714 time_based=1 00:18:32.714 runtime=1 00:18:32.714 ioengine=libaio 00:18:32.714 direct=1 00:18:32.714 bs=4096 00:18:32.714 iodepth=1 00:18:32.714 norandommap=0 00:18:32.714 numjobs=1 00:18:32.714 00:18:32.714 verify_dump=1 00:18:32.714 verify_backlog=512 00:18:32.714 verify_state_save=0 00:18:32.714 do_verify=1 00:18:32.714 verify=crc32c-intel 00:18:32.714 [job0] 00:18:32.714 filename=/dev/nvme0n1 00:18:32.714 Could not set queue depth (nvme0n1) 00:18:32.972 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.972 fio-3.35 00:18:32.972 Starting 1 thread 00:18:34.349 00:18:34.349 job0: (groupid=0, jobs=1): err= 0: pid=1620791: Mon Jul 15 19:24:44 2024 00:18:34.349 read: IOPS=1641, BW=6565KiB/s (6723kB/s)(6572KiB/1001msec) 00:18:34.349 slat (nsec): min=6890, max=44153, avg=7805.06, stdev=1757.11 
00:18:34.349 clat (usec): min=292, max=1766, avg=344.19, stdev=39.01 00:18:34.349 lat (usec): min=300, max=1773, avg=351.99, stdev=39.02 00:18:34.349 clat percentiles (usec): 00:18:34.349 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 334], 00:18:34.349 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 347], 00:18:34.349 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 359], 95.00th=[ 359], 00:18:34.349 | 99.00th=[ 400], 99.50th=[ 453], 99.90th=[ 478], 99.95th=[ 1762], 00:18:34.349 | 99.99th=[ 1762] 00:18:34.349 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:34.349 slat (nsec): min=10041, max=42642, avg=11303.40, stdev=2005.21 00:18:34.349 clat (usec): min=156, max=392, avg=189.42, stdev=10.82 00:18:34.349 lat (usec): min=171, max=432, avg=200.72, stdev=11.02 00:18:34.349 clat percentiles (usec): 00:18:34.349 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:18:34.349 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 188], 60.00th=[ 190], 00:18:34.349 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 198], 95.00th=[ 202], 00:18:34.349 | 99.00th=[ 233], 99.50th=[ 241], 99.90th=[ 302], 99.95th=[ 334], 00:18:34.349 | 99.99th=[ 392] 00:18:34.349 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:18:34.349 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:34.349 lat (usec) : 250=55.38%, 500=44.59% 00:18:34.349 lat (msec) : 2=0.03% 00:18:34.349 cpu : usr=3.20%, sys=5.60%, ctx=3692, majf=0, minf=2 00:18:34.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.349 issued rwts: total=1643,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.349 00:18:34.349 Run status group 0 (all jobs): 00:18:34.349 READ: bw=6565KiB/s (6723kB/s), 6565KiB/s-6565KiB/s (6723kB/s-6723kB/s), io=6572KiB (6730kB), run=1001-1001msec 00:18:34.349 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:18:34.349 00:18:34.349 Disk stats (read/write): 00:18:34.349 nvme0n1: ios=1586/1732, merge=0/0, ticks=527/306, in_queue=833, util=91.18% 00:18:34.349 19:24:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:34.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:34.349 
19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.349 rmmod nvme_tcp 00:18:34.349 rmmod nvme_fabrics 00:18:34.349 rmmod nvme_keyring 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1619718 ']' 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1619718 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1619718 ']' 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1619718 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1619718 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1619718' 00:18:34.349 killing process with pid 1619718 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1619718 00:18:34.349 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1619718 00:18:34.608 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:34.608 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:34.608 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:34.608 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.608 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.608 19:24:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.608 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.608 19:24:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.144 19:24:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:37.144 00:18:37.144 real 0m14.994s 00:18:37.144 user 0m35.106s 00:18:37.144 sys 0m5.068s 00:18:37.144 19:24:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.144 19:24:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:37.144 ************************************ 00:18:37.144 END TEST nvmf_nmic 00:18:37.144 ************************************ 00:18:37.144 19:24:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:37.144 19:24:47 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:37.144 19:24:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 
3 -le 1 ']' 00:18:37.144 19:24:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.144 19:24:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:37.144 ************************************ 00:18:37.144 START TEST nvmf_fio_target 00:18:37.144 ************************************ 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:37.144 * Looking for test storage... 00:18:37.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.144 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.145 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.145 19:24:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.145 19:24:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.145 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:37.145 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:37.145 19:24:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:37.145 19:24:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.438 19:24:52 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:42.438 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:42.438 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.438 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.438 19:24:52 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:42.438 Found net devices under 0000:86:00.0: cvl_0_0 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:42.439 Found net devices under 0000:86:00.1: cvl_0_1 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:42.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:18:42.439 00:18:42.439 --- 10.0.0.2 ping statistics --- 00:18:42.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.439 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:18:42.439 00:18:42.439 --- 10.0.0.1 ping statistics --- 00:18:42.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.439 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1624444 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1624444 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1624444 ']' 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
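(For reference, the namespace topology validated by the two pings above can be reproduced by hand with the sketch below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones reported for this particular host and will differ on other machines.)

  ip netns add cvl_0_0_ns_spdk                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator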
00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.439 19:24:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.439 [2024-07-15 19:24:52.921864] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:18:42.439 [2024-07-15 19:24:52.921909] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.439 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.439 [2024-07-15 19:24:52.951097] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:42.439 [2024-07-15 19:24:52.978048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.439 [2024-07-15 19:24:53.019852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.439 [2024-07-15 19:24:53.019889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.439 [2024-07-15 19:24:53.019896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.439 [2024-07-15 19:24:53.019902] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.439 [2024-07-15 19:24:53.019908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
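(Condensed sketch of the target launch traced above: the nvmf_tgt binary runs inside the cvl_0_0_ns_spdk namespace with all tracepoint groups enabled. The polling loop below is only a stand-in for the autotest waitforlisten helper, not its actual implementation, and the Jenkins workspace paths are shortened.)

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app answers RPCs on its default UNIX socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done
  # -e 0xFFFF enables every tracepoint group; per the notice above, a snapshot can be
  # captured later with: spdk_trace -s nvmf -i 0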
00:18:42.439 [2024-07-15 19:24:53.019944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.439 [2024-07-15 19:24:53.020040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.439 [2024-07-15 19:24:53.020107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.439 [2024-07-15 19:24:53.020109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.439 19:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.440 19:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:42.440 19:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.440 19:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:42.440 19:24:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.440 19:24:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.440 19:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:42.757 [2024-07-15 19:24:53.309632] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.757 19:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:42.757 19:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:42.757 19:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.015 19:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:43.015 19:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.273 19:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:43.273 19:24:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.273 19:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:43.273 19:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:43.531 19:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.789 19:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:43.789 19:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:44.048 19:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:44.048 19:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:44.048 19:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:44.048 19:24:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat 
-z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:44.306 19:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:44.565 19:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:44.565 19:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.565 19:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:44.565 19:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:44.854 19:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.113 [2024-07-15 19:24:55.727599] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.113 19:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:45.113 19:24:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:45.371 19:24:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:46.749 19:24:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:46.749 19:24:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.749 19:24:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:46.749 19:24:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:46.749 19:24:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:46.749 19:24:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:18:48.665 19:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:48.665 19:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:48.665 19:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:48.665 19:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:18:48.665 19:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:48.665 19:24:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:18:48.665 19:24:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:48.665 [global] 00:18:48.665 thread=1 00:18:48.665 invalidate=1 00:18:48.665 rw=write 00:18:48.665 time_based=1 00:18:48.665 runtime=1 00:18:48.665 ioengine=libaio 00:18:48.665 direct=1 00:18:48.665 bs=4096 00:18:48.665 iodepth=1 
00:18:48.665 norandommap=0 00:18:48.665 numjobs=1 00:18:48.665 00:18:48.665 verify_dump=1 00:18:48.665 verify_backlog=512 00:18:48.665 verify_state_save=0 00:18:48.665 do_verify=1 00:18:48.665 verify=crc32c-intel 00:18:48.665 [job0] 00:18:48.665 filename=/dev/nvme0n1 00:18:48.665 [job1] 00:18:48.665 filename=/dev/nvme0n2 00:18:48.665 [job2] 00:18:48.665 filename=/dev/nvme0n3 00:18:48.665 [job3] 00:18:48.665 filename=/dev/nvme0n4 00:18:48.665 Could not set queue depth (nvme0n1) 00:18:48.665 Could not set queue depth (nvme0n2) 00:18:48.665 Could not set queue depth (nvme0n3) 00:18:48.665 Could not set queue depth (nvme0n4) 00:18:48.924 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.924 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.924 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.924 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.924 fio-3.35 00:18:48.924 Starting 4 threads 00:18:50.294 00:18:50.294 job0: (groupid=0, jobs=1): err= 0: pid=1625667: Mon Jul 15 19:25:00 2024 00:18:50.294 read: IOPS=523, BW=2094KiB/s (2144kB/s)(2104KiB/1005msec) 00:18:50.294 slat (nsec): min=7069, max=22898, avg=8490.20, stdev=2544.02 00:18:50.294 clat (usec): min=299, max=41824, avg=1426.03, stdev=6553.14 00:18:50.294 lat (usec): min=307, max=41846, avg=1434.52, stdev=6555.23 00:18:50.294 clat percentiles (usec): 00:18:50.294 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:18:50.294 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:18:50.294 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 408], 00:18:50.294 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:18:50.294 | 99.99th=[41681] 00:18:50.294 write: IOPS=1018, BW=4076KiB/s (4173kB/s)(4096KiB/1005msec); 0 zone resets 00:18:50.294 slat (nsec): min=6428, max=50436, avg=12365.94, stdev=2953.46 00:18:50.294 clat (usec): min=159, max=1787, avg=224.05, stdev=115.69 00:18:50.294 lat (usec): min=172, max=1800, avg=236.42, stdev=116.24 00:18:50.294 clat percentiles (usec): 00:18:50.294 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:18:50.294 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 217], 00:18:50.294 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 273], 00:18:50.294 | 99.00th=[ 424], 99.50th=[ 1303], 99.90th=[ 1729], 99.95th=[ 1795], 00:18:50.294 | 99.99th=[ 1795] 00:18:50.294 bw ( KiB/s): min= 8192, max= 8192, per=69.33%, avg=8192.00, stdev= 0.00, samples=1 00:18:50.294 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:50.294 lat (usec) : 250=58.90%, 500=39.68% 00:18:50.294 lat (msec) : 2=0.52%, 50=0.90% 00:18:50.294 cpu : usr=1.29%, sys=2.59%, ctx=1551, majf=0, minf=1 00:18:50.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.294 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.294 job1: (groupid=0, jobs=1): err= 0: pid=1625668: Mon Jul 15 19:25:00 2024 00:18:50.294 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:18:50.294 slat (nsec): min=18984, 
max=24055, avg=22346.82, stdev=1121.23 00:18:50.294 clat (usec): min=40871, max=41911, avg=41032.33, stdev=210.37 00:18:50.294 lat (usec): min=40893, max=41933, avg=41054.67, stdev=210.16 00:18:50.294 clat percentiles (usec): 00:18:50.294 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:50.294 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:50.294 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:50.294 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:50.294 | 99.99th=[41681] 00:18:50.294 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:18:50.294 slat (nsec): min=4713, max=35126, avg=12441.38, stdev=2289.88 00:18:50.294 clat (usec): min=176, max=856, avg=219.69, stdev=50.99 00:18:50.294 lat (usec): min=188, max=867, avg=232.13, stdev=50.98 00:18:50.294 clat percentiles (usec): 00:18:50.294 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:18:50.294 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:18:50.294 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 245], 95.00th=[ 273], 00:18:50.294 | 99.00th=[ 379], 99.50th=[ 529], 99.90th=[ 857], 99.95th=[ 857], 00:18:50.294 | 99.99th=[ 857] 00:18:50.294 bw ( KiB/s): min= 4087, max= 4087, per=34.59%, avg=4087.00, stdev= 0.00, samples=1 00:18:50.294 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:18:50.294 lat (usec) : 250=87.83%, 500=7.30%, 750=0.37%, 1000=0.37% 00:18:50.294 lat (msec) : 50=4.12% 00:18:50.294 cpu : usr=0.78%, sys=0.49%, ctx=535, majf=0, minf=2 00:18:50.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.295 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.295 job2: (groupid=0, jobs=1): err= 0: pid=1625669: Mon Jul 15 19:25:00 2024 00:18:50.295 read: IOPS=506, BW=2027KiB/s (2076kB/s)(2108KiB/1040msec) 00:18:50.295 slat (nsec): min=7458, max=25793, avg=8809.36, stdev=2671.58 00:18:50.295 clat (usec): min=303, max=41136, avg=1504.05, stdev=6763.61 00:18:50.295 lat (usec): min=311, max=41149, avg=1512.86, stdev=6765.92 00:18:50.295 clat percentiles (usec): 00:18:50.295 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330], 00:18:50.295 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:18:50.295 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 388], 95.00th=[ 420], 00:18:50.295 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:50.295 | 99.99th=[41157] 00:18:50.295 write: IOPS=984, BW=3938KiB/s (4033kB/s)(4096KiB/1040msec); 0 zone resets 00:18:50.295 slat (nsec): min=10522, max=73558, avg=13614.47, stdev=5207.78 00:18:50.295 clat (usec): min=167, max=1304, avg=215.25, stdev=58.16 00:18:50.295 lat (usec): min=180, max=1315, avg=228.86, stdev=58.88 00:18:50.295 clat percentiles (usec): 00:18:50.295 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:18:50.295 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:18:50.295 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 269], 00:18:50.295 | 99.00th=[ 330], 99.50th=[ 396], 99.90th=[ 1303], 99.95th=[ 1303], 00:18:50.295 | 99.99th=[ 1303] 00:18:50.295 bw ( KiB/s): min= 8192, max= 8192, per=69.33%, avg=8192.00, stdev= 0.00, 
samples=1 00:18:50.295 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:50.295 lat (usec) : 250=60.93%, 500=37.72%, 750=0.26% 00:18:50.295 lat (msec) : 2=0.13%, 50=0.97% 00:18:50.295 cpu : usr=0.48%, sys=3.37%, ctx=1552, majf=0, minf=1 00:18:50.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.295 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.295 job3: (groupid=0, jobs=1): err= 0: pid=1625670: Mon Jul 15 19:25:00 2024 00:18:50.295 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:18:50.295 slat (nsec): min=10637, max=25118, avg=21901.91, stdev=2701.10 00:18:50.295 clat (usec): min=36918, max=41139, avg=40783.58, stdev=867.20 00:18:50.295 lat (usec): min=36940, max=41161, avg=40805.48, stdev=867.03 00:18:50.295 clat percentiles (usec): 00:18:50.295 | 1.00th=[36963], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:18:50.295 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:50.295 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:50.295 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:50.295 | 99.99th=[41157] 00:18:50.295 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:18:50.295 slat (nsec): min=10418, max=32550, avg=12019.84, stdev=2412.39 00:18:50.295 clat (usec): min=153, max=773, avg=189.00, stdev=39.52 00:18:50.295 lat (usec): min=165, max=786, avg=201.02, stdev=39.87 00:18:50.295 clat percentiles (usec): 00:18:50.295 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:18:50.295 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:18:50.295 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 223], 00:18:50.295 | 99.00th=[ 306], 99.50th=[ 482], 99.90th=[ 775], 99.95th=[ 775], 00:18:50.295 | 99.99th=[ 775] 00:18:50.295 bw ( KiB/s): min= 4087, max= 4087, per=34.59%, avg=4087.00, stdev= 0.00, samples=1 00:18:50.295 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:18:50.295 lat (usec) : 250=93.82%, 500=1.87%, 1000=0.19% 00:18:50.295 lat (msec) : 50=4.12% 00:18:50.295 cpu : usr=0.70%, sys=0.70%, ctx=535, majf=0, minf=1 00:18:50.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.295 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.295 00:18:50.295 Run status group 0 (all jobs): 00:18:50.295 READ: bw=4219KiB/s (4320kB/s), 85.6KiB/s-2094KiB/s (87.7kB/s-2144kB/s), io=4388KiB (4493kB), run=1005-1040msec 00:18:50.295 WRITE: bw=11.5MiB/s (12.1MB/s), 1992KiB/s-4076KiB/s (2040kB/s-4173kB/s), io=12.0MiB (12.6MB), run=1005-1040msec 00:18:50.295 00:18:50.295 Disk stats (read/write): 00:18:50.295 nvme0n1: ios=571/1024, merge=0/0, ticks=1417/217, in_queue=1634, util=85.57% 00:18:50.295 nvme0n2: ios=67/512, merge=0/0, ticks=1067/109, in_queue=1176, util=89.53% 00:18:50.295 nvme0n3: ios=579/1024, merge=0/0, ticks=791/209, in_queue=1000, util=93.24% 00:18:50.295 nvme0n4: ios=75/512, merge=0/0, ticks=825/81, 
in_queue=906, util=95.08% 00:18:50.295 19:25:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:50.295 [global] 00:18:50.295 thread=1 00:18:50.295 invalidate=1 00:18:50.295 rw=randwrite 00:18:50.295 time_based=1 00:18:50.295 runtime=1 00:18:50.295 ioengine=libaio 00:18:50.295 direct=1 00:18:50.295 bs=4096 00:18:50.295 iodepth=1 00:18:50.295 norandommap=0 00:18:50.295 numjobs=1 00:18:50.295 00:18:50.295 verify_dump=1 00:18:50.295 verify_backlog=512 00:18:50.295 verify_state_save=0 00:18:50.295 do_verify=1 00:18:50.295 verify=crc32c-intel 00:18:50.295 [job0] 00:18:50.295 filename=/dev/nvme0n1 00:18:50.295 [job1] 00:18:50.295 filename=/dev/nvme0n2 00:18:50.295 [job2] 00:18:50.295 filename=/dev/nvme0n3 00:18:50.295 [job3] 00:18:50.295 filename=/dev/nvme0n4 00:18:50.295 Could not set queue depth (nvme0n1) 00:18:50.295 Could not set queue depth (nvme0n2) 00:18:50.295 Could not set queue depth (nvme0n3) 00:18:50.295 Could not set queue depth (nvme0n4) 00:18:50.295 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.295 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.295 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.295 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.295 fio-3.35 00:18:50.295 Starting 4 threads 00:18:51.665 00:18:51.665 job0: (groupid=0, jobs=1): err= 0: pid=1626039: Mon Jul 15 19:25:02 2024 00:18:51.665 read: IOPS=342, BW=1369KiB/s (1401kB/s)(1396KiB/1020msec) 00:18:51.665 slat (nsec): min=6490, max=23127, avg=8191.72, stdev=3418.23 00:18:51.665 clat (usec): min=251, max=41994, avg=2542.08, stdev=9308.95 00:18:51.665 lat (usec): min=259, max=42014, avg=2550.28, stdev=9311.62 00:18:51.665 clat percentiles (usec): 00:18:51.665 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 277], 00:18:51.665 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:18:51.665 | 70.00th=[ 310], 80.00th=[ 351], 90.00th=[ 449], 95.00th=[41157], 00:18:51.665 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:51.665 | 99.99th=[42206] 00:18:51.665 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:18:51.665 slat (nsec): min=4424, max=38418, avg=9920.91, stdev=2147.56 00:18:51.665 clat (usec): min=161, max=402, avg=238.16, stdev=32.41 00:18:51.665 lat (usec): min=168, max=412, avg=248.08, stdev=32.52 00:18:51.665 clat percentiles (usec): 00:18:51.665 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 212], 00:18:51.665 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:18:51.665 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 289], 00:18:51.665 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 404], 99.95th=[ 404], 00:18:51.665 | 99.99th=[ 404] 00:18:51.665 bw ( KiB/s): min= 4096, max= 4096, per=29.71%, avg=4096.00, stdev= 0.00, samples=1 00:18:51.665 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:51.665 lat (usec) : 250=41.81%, 500=54.70%, 750=1.28% 00:18:51.665 lat (msec) : 50=2.21% 00:18:51.665 cpu : usr=0.20%, sys=0.98%, ctx=861, majf=0, minf=1 00:18:51.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.665 issued rwts: total=349,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.665 job1: (groupid=0, jobs=1): err= 0: pid=1626042: Mon Jul 15 19:25:02 2024 00:18:51.665 read: IOPS=86, BW=346KiB/s (355kB/s)(356KiB/1028msec) 00:18:51.665 slat (nsec): min=6836, max=23601, avg=11236.38, stdev=6731.04 00:18:51.665 clat (usec): min=256, max=42052, avg=10013.94, stdev=17520.99 00:18:51.665 lat (usec): min=262, max=42074, avg=10025.18, stdev=17527.51 00:18:51.665 clat percentiles (usec): 00:18:51.665 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 302], 00:18:51.665 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 351], 00:18:51.665 | 70.00th=[ 367], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:18:51.665 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:51.665 | 99.99th=[42206] 00:18:51.665 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:18:51.665 slat (nsec): min=9156, max=35183, avg=10352.12, stdev=1749.71 00:18:51.665 clat (usec): min=162, max=1458, avg=250.43, stdev=74.57 00:18:51.665 lat (usec): min=173, max=1468, avg=260.78, stdev=74.68 00:18:51.665 clat percentiles (usec): 00:18:51.665 | 1.00th=[ 180], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 221], 00:18:51.665 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:18:51.665 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 318], 00:18:51.665 | 99.00th=[ 408], 99.50th=[ 725], 99.90th=[ 1467], 99.95th=[ 1467], 00:18:51.665 | 99.99th=[ 1467] 00:18:51.665 bw ( KiB/s): min= 4096, max= 4096, per=29.71%, avg=4096.00, stdev= 0.00, samples=1 00:18:51.665 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:51.665 lat (usec) : 250=55.41%, 500=39.93%, 750=0.83%, 1000=0.17% 00:18:51.665 lat (msec) : 2=0.17%, 50=3.49% 00:18:51.665 cpu : usr=0.19%, sys=0.68%, ctx=602, majf=0, minf=2 00:18:51.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.665 issued rwts: total=89,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.665 job2: (groupid=0, jobs=1): err= 0: pid=1626049: Mon Jul 15 19:25:02 2024 00:18:51.665 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:18:51.665 slat (nsec): min=10731, max=25309, avg=23162.55, stdev=2866.54 00:18:51.665 clat (usec): min=40841, max=42037, avg=41340.18, stdev=491.59 00:18:51.665 lat (usec): min=40865, max=42062, avg=41363.34, stdev=491.83 00:18:51.665 clat percentiles (usec): 00:18:51.665 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:51.665 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:51.665 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:51.665 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:51.665 | 99.99th=[42206] 00:18:51.665 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:18:51.665 slat (nsec): min=10691, max=35378, avg=12088.94, stdev=2144.43 00:18:51.666 clat (usec): min=182, max=388, avg=238.55, stdev=25.31 00:18:51.666 lat (usec): min=194, max=420, avg=250.64, stdev=25.95 
00:18:51.666 clat percentiles (usec): 00:18:51.666 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 219], 00:18:51.666 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 243], 00:18:51.666 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 277], 00:18:51.666 | 99.00th=[ 322], 99.50th=[ 359], 99.90th=[ 388], 99.95th=[ 388], 00:18:51.666 | 99.99th=[ 388] 00:18:51.666 bw ( KiB/s): min= 4096, max= 4096, per=29.71%, avg=4096.00, stdev= 0.00, samples=1 00:18:51.666 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:51.666 lat (usec) : 250=70.79%, 500=25.09% 00:18:51.666 lat (msec) : 50=4.12% 00:18:51.666 cpu : usr=0.58%, sys=0.77%, ctx=535, majf=0, minf=1 00:18:51.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.666 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.666 job3: (groupid=0, jobs=1): err= 0: pid=1626053: Mon Jul 15 19:25:02 2024 00:18:51.666 read: IOPS=1681, BW=6725KiB/s (6887kB/s)(6732KiB/1001msec) 00:18:51.666 slat (nsec): min=7315, max=34938, avg=8298.22, stdev=1266.72 00:18:51.666 clat (usec): min=291, max=573, avg=333.06, stdev=35.82 00:18:51.666 lat (usec): min=299, max=580, avg=341.35, stdev=35.84 00:18:51.666 clat percentiles (usec): 00:18:51.666 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 306], 20.00th=[ 314], 00:18:51.666 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:18:51.666 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 359], 95.00th=[ 424], 00:18:51.666 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 562], 99.95th=[ 570], 00:18:51.666 | 99.99th=[ 570] 00:18:51.666 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:51.666 slat (nsec): min=10506, max=44158, avg=11777.67, stdev=1907.67 00:18:51.666 clat (usec): min=156, max=335, avg=190.45, stdev=28.52 00:18:51.666 lat (usec): min=168, max=375, avg=202.22, stdev=28.72 00:18:51.666 clat percentiles (usec): 00:18:51.666 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 172], 00:18:51.666 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:18:51.666 | 70.00th=[ 190], 80.00th=[ 208], 90.00th=[ 237], 95.00th=[ 249], 00:18:51.666 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 330], 99.95th=[ 330], 00:18:51.666 | 99.99th=[ 334] 00:18:51.666 bw ( KiB/s): min= 8192, max= 8192, per=59.43%, avg=8192.00, stdev= 0.00, samples=1 00:18:51.666 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:51.666 lat (usec) : 250=52.26%, 500=47.36%, 750=0.38% 00:18:51.666 cpu : usr=3.60%, sys=5.50%, ctx=3732, majf=0, minf=1 00:18:51.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.666 issued rwts: total=1683,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.666 00:18:51.666 Run status group 0 (all jobs): 00:18:51.666 READ: bw=8242KiB/s (8440kB/s), 84.6KiB/s-6725KiB/s (86.6kB/s-6887kB/s), io=8572KiB (8778kB), run=1001-1040msec 00:18:51.666 WRITE: bw=13.5MiB/s (14.1MB/s), 1969KiB/s-8184KiB/s (2016kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1040msec 
00:18:51.666 00:18:51.666 Disk stats (read/write): 00:18:51.666 nvme0n1: ios=394/512, merge=0/0, ticks=789/116, in_queue=905, util=91.27% 00:18:51.666 nvme0n2: ios=126/512, merge=0/0, ticks=1000/126, in_queue=1126, util=98.48% 00:18:51.666 nvme0n3: ios=67/512, merge=0/0, ticks=872/115, in_queue=987, util=98.85% 00:18:51.666 nvme0n4: ios=1560/1564, merge=0/0, ticks=1425/282, in_queue=1707, util=99.06% 00:18:51.666 19:25:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:51.666 [global] 00:18:51.666 thread=1 00:18:51.666 invalidate=1 00:18:51.666 rw=write 00:18:51.666 time_based=1 00:18:51.666 runtime=1 00:18:51.666 ioengine=libaio 00:18:51.666 direct=1 00:18:51.666 bs=4096 00:18:51.666 iodepth=128 00:18:51.666 norandommap=0 00:18:51.666 numjobs=1 00:18:51.666 00:18:51.666 verify_dump=1 00:18:51.666 verify_backlog=512 00:18:51.666 verify_state_save=0 00:18:51.666 do_verify=1 00:18:51.666 verify=crc32c-intel 00:18:51.666 [job0] 00:18:51.666 filename=/dev/nvme0n1 00:18:51.666 [job1] 00:18:51.666 filename=/dev/nvme0n2 00:18:51.666 [job2] 00:18:51.666 filename=/dev/nvme0n3 00:18:51.666 [job3] 00:18:51.666 filename=/dev/nvme0n4 00:18:51.666 Could not set queue depth (nvme0n1) 00:18:51.666 Could not set queue depth (nvme0n2) 00:18:51.666 Could not set queue depth (nvme0n3) 00:18:51.666 Could not set queue depth (nvme0n4) 00:18:51.923 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:51.923 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:51.923 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:51.923 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:51.923 fio-3.35 00:18:51.923 Starting 4 threads 00:18:53.317 00:18:53.317 job0: (groupid=0, jobs=1): err= 0: pid=1626467: Mon Jul 15 19:25:03 2024 00:18:53.317 read: IOPS=5218, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1005msec) 00:18:53.317 slat (nsec): min=1095, max=10890k, avg=100520.55, stdev=743019.35 00:18:53.317 clat (usec): min=1629, max=26558, avg=12536.89, stdev=3323.00 00:18:53.317 lat (usec): min=3689, max=26732, avg=12637.41, stdev=3370.85 00:18:53.317 clat percentiles (usec): 00:18:53.317 | 1.00th=[ 5014], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[ 9896], 00:18:53.317 | 30.00th=[10814], 40.00th=[11600], 50.00th=[11731], 60.00th=[12125], 00:18:53.317 | 70.00th=[13435], 80.00th=[15270], 90.00th=[17957], 95.00th=[18482], 00:18:53.317 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22676], 99.95th=[22938], 00:18:53.317 | 99.99th=[26608] 00:18:53.317 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:18:53.317 slat (usec): min=2, max=14971, avg=77.22, stdev=479.77 00:18:53.317 clat (usec): min=2344, max=25957, avg=10922.27, stdev=3226.70 00:18:53.317 lat (usec): min=2349, max=26044, avg=10999.49, stdev=3242.04 00:18:53.317 clat percentiles (usec): 00:18:53.317 | 1.00th=[ 3621], 5.00th=[ 5669], 10.00th=[ 7242], 20.00th=[ 8979], 00:18:53.317 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11600], 00:18:53.317 | 70.00th=[11994], 80.00th=[12387], 90.00th=[14353], 95.00th=[16712], 00:18:53.317 | 99.00th=[23462], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:18:53.317 | 99.99th=[26084] 00:18:53.317 bw ( KiB/s): min=21880, max=23152, per=28.96%, 
avg=22516.00, stdev=899.44, samples=2 00:18:53.317 iops : min= 5470, max= 5788, avg=5629.00, stdev=224.86, samples=2 00:18:53.317 lat (msec) : 2=0.01%, 4=1.17%, 10=26.38%, 20=70.39%, 50=2.06% 00:18:53.317 cpu : usr=3.88%, sys=5.78%, ctx=587, majf=0, minf=1 00:18:53.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:53.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:53.317 issued rwts: total=5245,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:53.317 job1: (groupid=0, jobs=1): err= 0: pid=1626480: Mon Jul 15 19:25:03 2024 00:18:53.317 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:18:53.317 slat (nsec): min=1349, max=18434k, avg=117928.56, stdev=934612.32 00:18:53.317 clat (usec): min=2824, max=40171, avg=14841.82, stdev=5718.60 00:18:53.317 lat (usec): min=3813, max=40177, avg=14959.74, stdev=5780.31 00:18:53.317 clat percentiles (usec): 00:18:53.317 | 1.00th=[ 4883], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[11338], 00:18:53.317 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13566], 00:18:53.317 | 70.00th=[16712], 80.00th=[20055], 90.00th=[21103], 95.00th=[27132], 00:18:53.317 | 99.00th=[36963], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:18:53.317 | 99.99th=[40109] 00:18:53.317 write: IOPS=4784, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1005msec); 0 zone resets 00:18:53.317 slat (usec): min=2, max=17452, avg=87.04, stdev=582.83 00:18:53.317 clat (usec): min=2342, max=38919, avg=12208.16, stdev=5535.18 00:18:53.317 lat (usec): min=2351, max=38925, avg=12295.19, stdev=5582.42 00:18:53.317 clat percentiles (usec): 00:18:53.317 | 1.00th=[ 3195], 5.00th=[ 5473], 10.00th=[ 6718], 20.00th=[ 9241], 00:18:53.317 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:18:53.317 | 70.00th=[12125], 80.00th=[12780], 90.00th=[17957], 95.00th=[22676], 00:18:53.317 | 99.00th=[34866], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:18:53.317 | 99.99th=[39060] 00:18:53.317 bw ( KiB/s): min=17368, max=20272, per=24.20%, avg=18820.00, stdev=2053.44, samples=2 00:18:53.317 iops : min= 4342, max= 5068, avg=4705.00, stdev=513.36, samples=2 00:18:53.317 lat (msec) : 4=1.18%, 10=14.82%, 20=70.09%, 50=13.91% 00:18:53.317 cpu : usr=3.69%, sys=5.38%, ctx=506, majf=0, minf=1 00:18:53.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:53.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:53.317 issued rwts: total=4608,4808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:53.317 job2: (groupid=0, jobs=1): err= 0: pid=1626497: Mon Jul 15 19:25:03 2024 00:18:53.317 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:18:53.317 slat (nsec): min=1083, max=12844k, avg=121621.42, stdev=895377.94 00:18:53.317 clat (usec): min=1152, max=76474, avg=16215.89, stdev=8562.28 00:18:53.317 lat (usec): min=1156, max=76477, avg=16337.52, stdev=8608.85 00:18:53.317 clat percentiles (usec): 00:18:53.317 | 1.00th=[ 4686], 5.00th=[ 9110], 10.00th=[11207], 20.00th=[11994], 00:18:53.317 | 30.00th=[13042], 40.00th=[13566], 50.00th=[13960], 60.00th=[14091], 00:18:53.317 | 70.00th=[15664], 80.00th=[19006], 90.00th=[24249], 95.00th=[28967], 00:18:53.317 
| 99.00th=[64226], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:18:53.317 | 99.99th=[76022] 00:18:53.317 write: IOPS=4499, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1002msec); 0 zone resets 00:18:53.317 slat (nsec): min=1994, max=15313k, avg=86548.41, stdev=629096.48 00:18:53.317 clat (usec): min=588, max=76474, avg=13391.04, stdev=7688.99 00:18:53.317 lat (usec): min=696, max=76478, avg=13477.58, stdev=7711.70 00:18:53.318 clat percentiles (usec): 00:18:53.318 | 1.00th=[ 2769], 5.00th=[ 5211], 10.00th=[ 7046], 20.00th=[ 9241], 00:18:53.318 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12911], 60.00th=[13698], 00:18:53.318 | 70.00th=[13829], 80.00th=[14746], 90.00th=[18744], 95.00th=[25822], 00:18:53.318 | 99.00th=[53740], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:18:53.318 | 99.99th=[76022] 00:18:53.318 bw ( KiB/s): min=17520, max=17520, per=22.53%, avg=17520.00, stdev= 0.00, samples=1 00:18:53.318 iops : min= 4380, max= 4380, avg=4380.00, stdev= 0.00, samples=1 00:18:53.318 lat (usec) : 750=0.05%, 1000=0.05% 00:18:53.318 lat (msec) : 2=0.35%, 4=0.91%, 10=15.07%, 20=70.27%, 50=12.01% 00:18:53.318 lat (msec) : 100=1.30% 00:18:53.318 cpu : usr=5.19%, sys=3.30%, ctx=437, majf=0, minf=1 00:18:53.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:53.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:53.318 issued rwts: total=4096,4508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.318 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:53.318 job3: (groupid=0, jobs=1): err= 0: pid=1626502: Mon Jul 15 19:25:03 2024 00:18:53.318 read: IOPS=4373, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1006msec) 00:18:53.318 slat (nsec): min=1066, max=17876k, avg=105222.62, stdev=888979.59 00:18:53.318 clat (usec): min=882, max=42039, avg=14417.37, stdev=5829.55 00:18:53.318 lat (usec): min=912, max=42046, avg=14522.59, stdev=5902.64 00:18:53.318 clat percentiles (usec): 00:18:53.318 | 1.00th=[ 3064], 5.00th=[ 5342], 10.00th=[ 6456], 20.00th=[10290], 00:18:53.318 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13566], 60.00th=[14877], 00:18:53.318 | 70.00th=[16057], 80.00th=[19268], 90.00th=[21365], 95.00th=[23725], 00:18:53.318 | 99.00th=[30016], 99.50th=[40109], 99.90th=[42206], 99.95th=[42206], 00:18:53.318 | 99.99th=[42206] 00:18:53.318 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:18:53.318 slat (nsec): min=1856, max=19967k, avg=92806.77, stdev=647858.22 00:18:53.318 clat (usec): min=419, max=70461, avg=13944.10, stdev=8652.21 00:18:53.318 lat (usec): min=710, max=70468, avg=14036.91, stdev=8702.45 00:18:53.318 clat percentiles (usec): 00:18:53.318 | 1.00th=[ 2442], 5.00th=[ 5932], 10.00th=[ 7046], 20.00th=[ 8586], 00:18:53.318 | 30.00th=[10683], 40.00th=[12518], 50.00th=[13566], 60.00th=[13829], 00:18:53.318 | 70.00th=[14091], 80.00th=[14615], 90.00th=[19792], 95.00th=[23462], 00:18:53.318 | 99.00th=[58983], 99.50th=[63701], 99.90th=[70779], 99.95th=[70779], 00:18:53.318 | 99.99th=[70779] 00:18:53.318 bw ( KiB/s): min=17416, max=19448, per=23.70%, avg=18432.00, stdev=1436.84, samples=2 00:18:53.318 iops : min= 4354, max= 4862, avg=4608.00, stdev=359.21, samples=2 00:18:53.318 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.11% 00:18:53.318 lat (msec) : 2=0.41%, 4=1.47%, 10=21.27%, 20=63.97%, 50=11.69% 00:18:53.318 lat (msec) : 100=1.03% 00:18:53.318 cpu : usr=3.18%, sys=5.37%, ctx=474, majf=0, minf=1 00:18:53.318 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:53.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:53.318 issued rwts: total=4400,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.318 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:53.318 00:18:53.318 Run status group 0 (all jobs): 00:18:53.318 READ: bw=71.2MiB/s (74.7MB/s), 16.0MiB/s-20.4MiB/s (16.7MB/s-21.4MB/s), io=71.7MiB (75.2MB), run=1002-1006msec 00:18:53.318 WRITE: bw=75.9MiB/s (79.6MB/s), 17.6MiB/s-21.9MiB/s (18.4MB/s-23.0MB/s), io=76.4MiB (80.1MB), run=1002-1006msec 00:18:53.318 00:18:53.318 Disk stats (read/write): 00:18:53.318 nvme0n1: ios=4611/4615, merge=0/0, ticks=57688/48429, in_queue=106117, util=96.19% 00:18:53.318 nvme0n2: ios=4121/4168, merge=0/0, ticks=54043/51080, in_queue=105123, util=100.00% 00:18:53.318 nvme0n3: ios=3630/4023, merge=0/0, ticks=55286/50663, in_queue=105949, util=99.69% 00:18:53.318 nvme0n4: ios=3584/3919, merge=0/0, ticks=52250/49355, in_queue=101605, util=89.64% 00:18:53.318 19:25:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:53.318 [global] 00:18:53.318 thread=1 00:18:53.318 invalidate=1 00:18:53.318 rw=randwrite 00:18:53.318 time_based=1 00:18:53.318 runtime=1 00:18:53.318 ioengine=libaio 00:18:53.318 direct=1 00:18:53.318 bs=4096 00:18:53.318 iodepth=128 00:18:53.318 norandommap=0 00:18:53.318 numjobs=1 00:18:53.318 00:18:53.318 verify_dump=1 00:18:53.318 verify_backlog=512 00:18:53.318 verify_state_save=0 00:18:53.318 do_verify=1 00:18:53.318 verify=crc32c-intel 00:18:53.318 [job0] 00:18:53.318 filename=/dev/nvme0n1 00:18:53.318 [job1] 00:18:53.318 filename=/dev/nvme0n2 00:18:53.318 [job2] 00:18:53.318 filename=/dev/nvme0n3 00:18:53.318 [job3] 00:18:53.318 filename=/dev/nvme0n4 00:18:53.318 Could not set queue depth (nvme0n1) 00:18:53.318 Could not set queue depth (nvme0n2) 00:18:53.318 Could not set queue depth (nvme0n3) 00:18:53.318 Could not set queue depth (nvme0n4) 00:18:53.583 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.583 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.583 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.583 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.583 fio-3.35 00:18:53.583 Starting 4 threads 00:18:54.953 00:18:54.953 job0: (groupid=0, jobs=1): err= 0: pid=1626896: Mon Jul 15 19:25:05 2024 00:18:54.953 read: IOPS=3185, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1004msec) 00:18:54.953 slat (nsec): min=1356, max=14826k, avg=134800.02, stdev=906850.51 00:18:54.953 clat (usec): min=1937, max=41800, avg=15634.07, stdev=5410.90 00:18:54.953 lat (usec): min=4799, max=41803, avg=15768.87, stdev=5490.78 00:18:54.953 clat percentiles (usec): 00:18:54.953 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[11469], 20.00th=[12125], 00:18:54.953 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[14484], 00:18:54.953 | 70.00th=[16909], 80.00th=[19792], 90.00th=[22938], 95.00th=[26870], 00:18:54.953 | 99.00th=[33817], 99.50th=[38011], 99.90th=[41681], 99.95th=[41681], 00:18:54.953 | 99.99th=[41681] 00:18:54.953 write: 
IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:18:54.953 slat (usec): min=2, max=18450, avg=153.38, stdev=831.00 00:18:54.953 clat (usec): min=3392, max=59417, avg=21567.28, stdev=10515.32 00:18:54.953 lat (usec): min=3402, max=59423, avg=21720.66, stdev=10573.29 00:18:54.953 clat percentiles (usec): 00:18:54.953 | 1.00th=[ 5276], 5.00th=[ 7635], 10.00th=[ 9896], 20.00th=[11731], 00:18:54.954 | 30.00th=[16450], 40.00th=[19530], 50.00th=[20841], 60.00th=[21365], 00:18:54.954 | 70.00th=[22414], 80.00th=[28181], 90.00th=[33162], 95.00th=[43779], 00:18:54.954 | 99.00th=[58983], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:18:54.954 | 99.99th=[59507] 00:18:54.954 bw ( KiB/s): min=12288, max=16368, per=20.74%, avg=14328.00, stdev=2885.00, samples=2 00:18:54.954 iops : min= 3072, max= 4092, avg=3582.00, stdev=721.25, samples=2 00:18:54.954 lat (msec) : 2=0.01%, 4=0.19%, 10=7.09%, 20=54.56%, 50=36.38% 00:18:54.954 lat (msec) : 100=1.77% 00:18:54.954 cpu : usr=3.09%, sys=3.79%, ctx=407, majf=0, minf=1 00:18:54.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:54.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.954 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.954 job1: (groupid=0, jobs=1): err= 0: pid=1626903: Mon Jul 15 19:25:05 2024 00:18:54.954 read: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec) 00:18:54.954 slat (nsec): min=1037, max=12992k, avg=80191.90, stdev=538638.53 00:18:54.954 clat (usec): min=2250, max=28698, avg=10741.78, stdev=2780.54 00:18:54.954 lat (usec): min=2263, max=28704, avg=10821.97, stdev=2799.95 00:18:54.954 clat percentiles (usec): 00:18:54.954 | 1.00th=[ 3720], 5.00th=[ 7046], 10.00th=[ 8160], 20.00th=[ 8979], 00:18:54.954 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11076], 00:18:54.954 | 70.00th=[11338], 80.00th=[11863], 90.00th=[13304], 95.00th=[15008], 00:18:54.954 | 99.00th=[20317], 99.50th=[25822], 99.90th=[28443], 99.95th=[28705], 00:18:54.954 | 99.99th=[28705] 00:18:54.954 write: IOPS=6388, BW=25.0MiB/s (26.2MB/s)(25.0MiB/1001msec); 0 zone resets 00:18:54.954 slat (nsec): min=1795, max=8315.2k, avg=66025.51, stdev=334142.23 00:18:54.954 clat (usec): min=416, max=26381, avg=9547.01, stdev=3097.11 00:18:54.954 lat (usec): min=443, max=26385, avg=9613.04, stdev=3115.25 00:18:54.954 clat percentiles (usec): 00:18:54.954 | 1.00th=[ 1631], 5.00th=[ 3392], 10.00th=[ 5211], 20.00th=[ 7832], 00:18:54.954 | 30.00th=[ 8848], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[10552], 00:18:54.954 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11600], 95.00th=[12911], 00:18:54.954 | 99.00th=[20841], 99.50th=[22938], 99.90th=[25822], 99.95th=[25822], 00:18:54.954 | 99.99th=[26346] 00:18:54.954 bw ( KiB/s): min=22608, max=27536, per=36.29%, avg=25072.00, stdev=3484.62, samples=2 00:18:54.954 iops : min= 5652, max= 6884, avg=6268.00, stdev=871.16, samples=2 00:18:54.954 lat (usec) : 500=0.02%, 750=0.06%, 1000=0.08% 00:18:54.954 lat (msec) : 2=0.79%, 4=3.77%, 10=32.39%, 20=61.49%, 50=1.41% 00:18:54.954 cpu : usr=3.70%, sys=6.80%, ctx=600, majf=0, minf=1 00:18:54.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:54.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.954 issued rwts: total=6144,6395,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.954 job2: (groupid=0, jobs=1): err= 0: pid=1626919: Mon Jul 15 19:25:05 2024 00:18:54.954 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:18:54.954 slat (nsec): min=1356, max=13468k, avg=97765.43, stdev=708515.34 00:18:54.954 clat (usec): min=3229, max=30086, avg=12862.80, stdev=3685.81 00:18:54.954 lat (usec): min=3252, max=30094, avg=12960.56, stdev=3725.63 00:18:54.954 clat percentiles (usec): 00:18:54.954 | 1.00th=[ 4047], 5.00th=[ 6390], 10.00th=[ 9372], 20.00th=[10945], 00:18:54.954 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12387], 00:18:54.954 | 70.00th=[13698], 80.00th=[16057], 90.00th=[17957], 95.00th=[19792], 00:18:54.954 | 99.00th=[22152], 99.50th=[25560], 99.90th=[30016], 99.95th=[30016], 00:18:54.954 | 99.99th=[30016] 00:18:54.954 write: IOPS=5339, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1008msec); 0 zone resets 00:18:54.954 slat (usec): min=2, max=14424, avg=82.02, stdev=454.64 00:18:54.954 clat (usec): min=150, max=32581, avg=11522.66, stdev=4657.97 00:18:54.954 lat (usec): min=1168, max=32586, avg=11604.68, stdev=4685.66 00:18:54.954 clat percentiles (usec): 00:18:54.954 | 1.00th=[ 3261], 5.00th=[ 5211], 10.00th=[ 5997], 20.00th=[ 8356], 00:18:54.954 | 30.00th=[10421], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:18:54.954 | 70.00th=[12125], 80.00th=[12387], 90.00th=[14091], 95.00th=[22152], 00:18:54.954 | 99.00th=[29230], 99.50th=[30278], 99.90th=[32637], 99.95th=[32637], 00:18:54.954 | 99.99th=[32637] 00:18:54.954 bw ( KiB/s): min=20521, max=21560, per=30.46%, avg=21040.50, stdev=734.68, samples=2 00:18:54.954 iops : min= 5130, max= 5390, avg=5260.00, stdev=183.85, samples=2 00:18:54.954 lat (usec) : 250=0.01%, 1000=0.01% 00:18:54.954 lat (msec) : 2=0.18%, 4=1.14%, 10=19.58%, 20=73.43%, 50=5.65% 00:18:54.954 cpu : usr=3.57%, sys=6.06%, ctx=603, majf=0, minf=1 00:18:54.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:54.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.954 issued rwts: total=5120,5382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.954 job3: (groupid=0, jobs=1): err= 0: pid=1626925: Mon Jul 15 19:25:05 2024 00:18:54.954 read: IOPS=1945, BW=7781KiB/s (7968kB/s)(7812KiB/1004msec) 00:18:54.954 slat (usec): min=4, max=41229, avg=266.00, stdev=1906.93 00:18:54.954 clat (usec): min=433, max=116498, avg=34579.96, stdev=22283.78 00:18:54.954 lat (msec): min=11, max=116, avg=34.85, stdev=22.34 00:18:54.954 clat percentiles (msec): 00:18:54.954 | 1.00th=[ 12], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 22], 00:18:54.954 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 23], 60.00th=[ 30], 00:18:54.954 | 70.00th=[ 36], 80.00th=[ 50], 90.00th=[ 59], 95.00th=[ 82], 00:18:54.954 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 117], 99.95th=[ 117], 00:18:54.954 | 99.99th=[ 117] 00:18:54.954 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:18:54.954 slat (usec): min=4, max=30700, avg=228.09, stdev=1551.63 00:18:54.954 clat (usec): min=10087, max=79969, avg=27514.19, stdev=17068.63 00:18:54.954 lat (usec): min=12805, max=79981, avg=27742.28, stdev=17151.46 00:18:54.954 clat percentiles (usec): 00:18:54.954 | 1.00th=[12649], 
5.00th=[13173], 10.00th=[14746], 20.00th=[15926], 00:18:54.954 | 30.00th=[16319], 40.00th=[16581], 50.00th=[17695], 60.00th=[21627], 00:18:54.954 | 70.00th=[31065], 80.00th=[42730], 90.00th=[57410], 95.00th=[59507], 00:18:54.954 | 99.00th=[80217], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:18:54.954 | 99.99th=[80217] 00:18:54.954 bw ( KiB/s): min= 8192, max= 8192, per=11.86%, avg=8192.00, stdev= 0.00, samples=2 00:18:54.954 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:18:54.954 lat (usec) : 500=0.02% 00:18:54.954 lat (msec) : 20=33.92%, 50=48.81%, 100=15.67%, 250=1.57% 00:18:54.954 cpu : usr=2.39%, sys=3.29%, ctx=133, majf=0, minf=1 00:18:54.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:54.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.954 issued rwts: total=1953,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.954 00:18:54.954 Run status group 0 (all jobs): 00:18:54.954 READ: bw=63.6MiB/s (66.7MB/s), 7781KiB/s-24.0MiB/s (7968kB/s-25.1MB/s), io=64.1MiB (67.2MB), run=1001-1008msec 00:18:54.954 WRITE: bw=67.5MiB/s (70.7MB/s), 8159KiB/s-25.0MiB/s (8355kB/s-26.2MB/s), io=68.0MiB (71.3MB), run=1001-1008msec 00:18:54.954 00:18:54.954 Disk stats (read/write): 00:18:54.954 nvme0n1: ios=2612/3072, merge=0/0, ticks=40306/65219, in_queue=105525, util=94.19% 00:18:54.954 nvme0n2: ios=5206/5632, merge=0/0, ticks=32001/25832, in_queue=57833, util=98.58% 00:18:54.954 nvme0n3: ios=4145/4608, merge=0/0, ticks=48108/45906, in_queue=94014, util=97.82% 00:18:54.954 nvme0n4: ios=1593/1696, merge=0/0, ticks=15122/12077, in_queue=27199, util=98.43% 00:18:54.954 19:25:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:54.954 19:25:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1627036 00:18:54.954 19:25:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:54.954 19:25:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:54.954 [global] 00:18:54.954 thread=1 00:18:54.954 invalidate=1 00:18:54.954 rw=read 00:18:54.954 time_based=1 00:18:54.954 runtime=10 00:18:54.954 ioengine=libaio 00:18:54.954 direct=1 00:18:54.954 bs=4096 00:18:54.954 iodepth=1 00:18:54.954 norandommap=1 00:18:54.954 numjobs=1 00:18:54.954 00:18:54.954 [job0] 00:18:54.954 filename=/dev/nvme0n1 00:18:54.954 [job1] 00:18:54.954 filename=/dev/nvme0n2 00:18:54.954 [job2] 00:18:54.954 filename=/dev/nvme0n3 00:18:54.954 [job3] 00:18:54.954 filename=/dev/nvme0n4 00:18:54.954 Could not set queue depth (nvme0n1) 00:18:54.954 Could not set queue depth (nvme0n2) 00:18:54.954 Could not set queue depth (nvme0n3) 00:18:54.954 Could not set queue depth (nvme0n4) 00:18:54.954 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:54.954 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:54.954 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:54.954 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:54.954 fio-3.35 00:18:54.954 Starting 4 threads 00:18:58.232 19:25:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:58.232 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1163264, buflen=4096 00:18:58.232 fio: pid=1627380, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:58.232 19:25:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:58.232 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=651264, buflen=4096 00:18:58.232 fio: pid=1627379, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:58.232 19:25:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:58.232 19:25:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:58.232 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=19734528, buflen=4096 00:18:58.232 fio: pid=1627343, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:58.232 19:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:58.232 19:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:58.490 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=327680, buflen=4096 00:18:58.490 fio: pid=1627360, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:58.490 19:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:58.490 19:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:58.490 00:18:58.490 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1627343: Mon Jul 15 19:25:09 2024 00:18:58.490 read: IOPS=1562, BW=6249KiB/s (6399kB/s)(18.8MiB/3084msec) 00:18:58.490 slat (usec): min=6, max=29113, avg=16.22, stdev=469.18 00:18:58.490 clat (usec): min=224, max=42472, avg=618.27, stdev=3720.12 00:18:58.490 lat (usec): min=231, max=42481, avg=634.49, stdev=3750.33 00:18:58.490 clat percentiles (usec): 00:18:58.490 | 1.00th=[ 251], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 265], 00:18:58.490 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 277], 00:18:58.490 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 289], 95.00th=[ 297], 00:18:58.490 | 99.00th=[ 474], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:58.490 | 99.99th=[42730] 00:18:58.490 bw ( KiB/s): min= 96, max=14080, per=74.44%, avg=4843.20, stdev=6667.43, samples=5 00:18:58.490 iops : min= 24, max= 3520, avg=1210.80, stdev=1666.86, samples=5 00:18:58.490 lat (usec) : 250=0.81%, 500=98.24%, 750=0.06% 00:18:58.490 lat (msec) : 4=0.02%, 10=0.02%, 50=0.83% 00:18:58.490 cpu : usr=0.42%, sys=1.36%, ctx=4822, majf=0, minf=1 00:18:58.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.490 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.490 issued rwts: total=4819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.490 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:18:58.490 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1627360: Mon Jul 15 19:25:09 2024 00:18:58.490 read: IOPS=24, BW=97.4KiB/s (99.8kB/s)(320KiB/3284msec) 00:18:58.490 slat (usec): min=12, max=10798, avg=156.00, stdev=1197.28 00:18:58.490 clat (usec): min=574, max=45063, avg=40625.28, stdev=4575.87 00:18:58.490 lat (usec): min=634, max=52047, avg=40782.37, stdev=4745.64 00:18:58.490 clat percentiles (usec): 00:18:58.490 | 1.00th=[ 578], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:58.490 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:58.490 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:18:58.490 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:18:58.490 | 99.99th=[44827] 00:18:58.490 bw ( KiB/s): min= 96, max= 104, per=1.51%, avg=98.00, stdev= 3.35, samples=6 00:18:58.490 iops : min= 24, max= 26, avg=24.50, stdev= 0.84, samples=6 00:18:58.490 lat (usec) : 750=1.23% 00:18:58.490 lat (msec) : 50=97.53% 00:18:58.491 cpu : usr=0.12%, sys=0.00%, ctx=84, majf=0, minf=1 00:18:58.491 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.491 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.491 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.491 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.491 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1627379: Mon Jul 15 19:25:09 2024 00:18:58.491 read: IOPS=55, BW=219KiB/s (224kB/s)(636KiB/2902msec) 00:18:58.491 slat (nsec): min=7543, max=74566, avg=15630.07, stdev=8629.28 00:18:58.491 clat (usec): min=304, max=42042, avg=18077.49, stdev=20327.60 00:18:58.491 lat (usec): min=312, max=42065, avg=18093.07, stdev=20334.83 00:18:58.491 clat percentiles (usec): 00:18:58.491 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:18:58.491 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[41157], 00:18:58.491 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:18:58.491 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:58.491 | 99.99th=[42206] 00:18:58.491 bw ( KiB/s): min= 96, max= 104, per=1.49%, avg=97.60, stdev= 3.58, samples=5 00:18:58.491 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:18:58.491 lat (usec) : 500=55.00%, 750=0.62%, 1000=0.62% 00:18:58.491 lat (msec) : 50=43.12% 00:18:58.491 cpu : usr=0.00%, sys=0.17%, ctx=165, majf=0, minf=1 00:18:58.491 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.491 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.491 issued rwts: total=160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.491 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.491 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1627380: Mon Jul 15 19:25:09 2024 00:18:58.491 read: IOPS=105, BW=420KiB/s (431kB/s)(1136KiB/2702msec) 00:18:58.491 slat (nsec): min=7939, max=37015, avg=12602.00, stdev=6194.35 00:18:58.491 clat (usec): min=290, max=42070, avg=9394.66, stdev=16976.35 00:18:58.491 lat (usec): min=298, max=42092, avg=9407.28, stdev=16981.35 00:18:58.491 clat 
percentiles (usec): 00:18:58.491 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 314], 00:18:58.491 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:18:58.491 | 70.00th=[ 338], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:18:58.491 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:58.491 | 99.99th=[42206] 00:18:58.491 bw ( KiB/s): min= 96, max= 112, per=1.57%, avg=102.40, stdev= 6.69, samples=5 00:18:58.491 iops : min= 24, max= 28, avg=25.60, stdev= 1.67, samples=5 00:18:58.491 lat (usec) : 500=75.79%, 750=1.05% 00:18:58.491 lat (msec) : 2=0.35%, 4=0.35%, 50=22.11% 00:18:58.491 cpu : usr=0.00%, sys=0.30%, ctx=287, majf=0, minf=2 00:18:58.491 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.491 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.491 issued rwts: total=285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.491 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.491 00:18:58.491 Run status group 0 (all jobs): 00:18:58.491 READ: bw=6505KiB/s (6662kB/s), 97.4KiB/s-6249KiB/s (99.8kB/s-6399kB/s), io=20.9MiB (21.9MB), run=2702-3284msec 00:18:58.491 00:18:58.491 Disk stats (read/write): 00:18:58.491 nvme0n1: ios=4072/0, merge=0/0, ticks=2749/0, in_queue=2749, util=93.72% 00:18:58.491 nvme0n2: ios=76/0, merge=0/0, ticks=3084/0, in_queue=3084, util=95.70% 00:18:58.491 nvme0n3: ios=206/0, merge=0/0, ticks=3718/0, in_queue=3718, util=99.19% 00:18:58.491 nvme0n4: ios=114/0, merge=0/0, ticks=3601/0, in_queue=3601, util=99.48% 00:18:58.749 19:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:58.749 19:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:59.006 19:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:59.006 19:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:59.006 19:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:59.006 19:25:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:59.263 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:59.263 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1627036 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:59.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1219 -- # local i=0 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:59.521 nvmf hotplug test: fio failed as expected 00:18:59.521 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.778 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:59.778 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.779 rmmod nvme_tcp 00:18:59.779 rmmod nvme_fabrics 00:18:59.779 rmmod nvme_keyring 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1624444 ']' 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1624444 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1624444 ']' 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1624444 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.779 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1624444 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1624444' 00:19:00.036 killing process with pid 1624444 00:19:00.036 19:25:10 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1624444 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1624444 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.036 19:25:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.589 19:25:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:02.589 00:19:02.589 real 0m25.387s 00:19:02.589 user 1m43.674s 00:19:02.589 sys 0m7.341s 00:19:02.589 19:25:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:02.589 19:25:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.589 ************************************ 00:19:02.589 END TEST nvmf_fio_target 00:19:02.589 ************************************ 00:19:02.589 19:25:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:02.589 19:25:12 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:02.589 19:25:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:02.589 19:25:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.589 19:25:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:02.589 ************************************ 00:19:02.589 START TEST nvmf_bdevio 00:19:02.589 ************************************ 00:19:02.589 19:25:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:02.589 * Looking for test storage... 
00:19:02.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:02.589 19:25:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:07.870 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:07.870 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:07.870 Found net devices under 0000:86:00.0: cvl_0_0 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:07.870 
Found net devices under 0000:86:00.1: cvl_0_1 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:07.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:19:07.870 00:19:07.870 --- 10.0.0.2 ping statistics --- 00:19:07.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.870 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:19:07.870 00:19:07.870 --- 10.0.0.1 ping statistics --- 00:19:07.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.870 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1631605 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1631605 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1631605 ']' 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.870 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.129 [2024-07-15 19:25:18.753560] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:19:08.129 [2024-07-15 19:25:18.753601] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.129 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.129 [2024-07-15 19:25:18.782818] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:08.129 [2024-07-15 19:25:18.811342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.129 [2024-07-15 19:25:18.852892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:08.129 [2024-07-15 19:25:18.852929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.129 [2024-07-15 19:25:18.852936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.129 [2024-07-15 19:25:18.852942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.129 [2024-07-15 19:25:18.852948] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.129 [2024-07-15 19:25:18.853000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:08.129 [2024-07-15 19:25:18.853036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:08.129 [2024-07-15 19:25:18.853141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.129 [2024-07-15 19:25:18.853143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:08.129 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.129 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:08.129 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.129 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.129 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.129 19:25:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.129 19:25:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:08.129 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.129 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.387 [2024-07-15 19:25:18.987327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.387 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.387 19:25:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:08.387 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.387 19:25:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.387 Malloc0 00:19:08.387 19:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.387 19:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:08.387 19:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.387 19:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.388 19:25:19 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.388 [2024-07-15 19:25:19.038568] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:08.388 { 00:19:08.388 "params": { 00:19:08.388 "name": "Nvme$subsystem", 00:19:08.388 "trtype": "$TEST_TRANSPORT", 00:19:08.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:08.388 "adrfam": "ipv4", 00:19:08.388 "trsvcid": "$NVMF_PORT", 00:19:08.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:08.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:08.388 "hdgst": ${hdgst:-false}, 00:19:08.388 "ddgst": ${ddgst:-false} 00:19:08.388 }, 00:19:08.388 "method": "bdev_nvme_attach_controller" 00:19:08.388 } 00:19:08.388 EOF 00:19:08.388 )") 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:08.388 19:25:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:08.388 "params": { 00:19:08.388 "name": "Nvme1", 00:19:08.388 "trtype": "tcp", 00:19:08.388 "traddr": "10.0.0.2", 00:19:08.388 "adrfam": "ipv4", 00:19:08.388 "trsvcid": "4420", 00:19:08.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.388 "hdgst": false, 00:19:08.388 "ddgst": false 00:19:08.388 }, 00:19:08.388 "method": "bdev_nvme_attach_controller" 00:19:08.388 }' 00:19:08.388 [2024-07-15 19:25:19.086373] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:19:08.388 [2024-07-15 19:25:19.086428] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631637 ] 00:19:08.388 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.388 [2024-07-15 19:25:19.112407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:08.388 [2024-07-15 19:25:19.140336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:08.388 [2024-07-15 19:25:19.182233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.388 [2024-07-15 19:25:19.182251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.388 [2024-07-15 19:25:19.182254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.645 I/O targets: 00:19:08.645 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:08.645 00:19:08.645 00:19:08.645 CUnit - A unit testing framework for C - Version 2.1-3 00:19:08.645 http://cunit.sourceforge.net/ 00:19:08.645 00:19:08.645 00:19:08.645 Suite: bdevio tests on: Nvme1n1 00:19:08.645 Test: blockdev write read block ...passed 00:19:08.933 Test: blockdev write zeroes read block ...passed 00:19:08.933 Test: blockdev write zeroes read no split ...passed 00:19:08.933 Test: blockdev write zeroes read split ...passed 00:19:08.933 Test: blockdev write zeroes read split partial ...passed 00:19:08.933 Test: blockdev reset ...[2024-07-15 19:25:19.530235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:08.933 [2024-07-15 19:25:19.530298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84ffb0 (9): Bad file descriptor 00:19:08.933 [2024-07-15 19:25:19.586246] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:08.933 passed 00:19:08.933 Test: blockdev write read 8 blocks ...passed 00:19:08.933 Test: blockdev write read size > 128k ...passed 00:19:08.933 Test: blockdev write read invalid size ...passed 00:19:08.933 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:08.933 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:08.933 Test: blockdev write read max offset ...passed 00:19:09.191 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:09.191 Test: blockdev writev readv 8 blocks ...passed 00:19:09.191 Test: blockdev writev readv 30 x 1block ...passed 00:19:09.191 Test: blockdev writev readv block ...passed 00:19:09.191 Test: blockdev writev readv size > 128k ...passed 00:19:09.191 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:09.191 Test: blockdev comparev and writev ...[2024-07-15 19:25:19.838890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.191 [2024-07-15 19:25:19.838920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:09.191 [2024-07-15 19:25:19.838934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.192 [2024-07-15 19:25:19.838946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:09.192 [2024-07-15 19:25:19.839276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.192 [2024-07-15 19:25:19.839287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:09.192 [2024-07-15 19:25:19.839298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:19:09.192 [2024-07-15 19:25:19.839306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:09.192 [2024-07-15 19:25:19.839632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.192 [2024-07-15 19:25:19.839643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:09.192 [2024-07-15 19:25:19.839654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.192 [2024-07-15 19:25:19.839662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:09.192 [2024-07-15 19:25:19.840001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.192 [2024-07-15 19:25:19.840013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:09.192 [2024-07-15 19:25:19.840024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.192 [2024-07-15 19:25:19.840032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:09.192 passed 00:19:09.192 Test: blockdev nvme passthru rw ...passed 00:19:09.192 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:25:19.922615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.192 [2024-07-15 19:25:19.922633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:09.192 [2024-07-15 19:25:19.922826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.192 [2024-07-15 19:25:19.922836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:09.192 [2024-07-15 19:25:19.923028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.192 [2024-07-15 19:25:19.923038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:09.192 [2024-07-15 19:25:19.923233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.192 [2024-07-15 19:25:19.923243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:09.192 passed 00:19:09.192 Test: blockdev nvme admin passthru ...passed 00:19:09.192 Test: blockdev copy ...passed 00:19:09.192 00:19:09.192 Run Summary: Type Total Ran Passed Failed Inactive 00:19:09.192 suites 1 1 n/a 0 0 00:19:09.192 tests 23 23 23 0 0 00:19:09.192 asserts 152 152 152 0 n/a 00:19:09.192 00:19:09.192 Elapsed time = 1.124 seconds 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.450 rmmod nvme_tcp 00:19:09.450 rmmod nvme_fabrics 00:19:09.450 rmmod nvme_keyring 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1631605 ']' 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1631605 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1631605 ']' 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1631605 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631605 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631605' 00:19:09.450 killing process with pid 1631605 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1631605 00:19:09.450 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1631605 00:19:09.710 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:09.710 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:09.710 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:09.710 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.710 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:09.710 19:25:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.710 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.710 19:25:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.243 19:25:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.243 00:19:12.243 real 0m9.547s 00:19:12.243 user 0m10.000s 00:19:12.243 sys 0m4.673s 00:19:12.243 19:25:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 
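nvmftestfini above reverses everything the bdevio test set up: unload the host-side NVMe/TCP modules, stop the nvmf_tgt application, and drop the test addressing. Condensed from the trace, with the namespace removal hedged (the body of _remove_spdk_ns is not shown in this log, so the ip netns delete line is an assumption about what it does):

trap - SIGINT SIGTERM EXIT                 # drop the handlers installed when the test started
modprobe -v -r nvme-tcp                    # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as echoed above
modprobe -v -r nvme-fabrics
kill 1631605 && wait 1631605               # killprocess: stop the nvmf_tgt started for this test
ip netns delete cvl_0_0_ns_spdk || true    # assumed content of _remove_spdk_ns
ip -4 addr flush cvl_0_1                   # clear the initiator-side address used for the run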
00:19:12.243 19:25:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:12.243 ************************************ 00:19:12.243 END TEST nvmf_bdevio 00:19:12.243 ************************************ 00:19:12.243 19:25:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:12.243 19:25:22 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:12.243 19:25:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:12.243 19:25:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:12.243 19:25:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:12.243 ************************************ 00:19:12.243 START TEST nvmf_auth_target 00:19:12.243 ************************************ 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:12.243 * Looking for test storage... 00:19:12.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.243 19:25:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:12.244 19:25:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:12.244 19:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
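target/auth.sh sets up its test matrix here: three digests, six DH groups, and the keys[]/ckeys[] slots it fills in a moment. Every combination is exercised by the same nested loop, sketched below from the connect_authenticate calls that appear further on in the trace (hostrpc is the auth.sh helper that talks to /var/tmp/host.sock):

digests=("sha256" "sha384" "sha512")
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # limit the host to one digest/dhgroup pair, then run a full attach/detach cycle
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done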
00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:17.511 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:17.512 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:17.512 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
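The two "Found 0000:86:00.x (0x8086 - 0x159b)" lines mean the harness picked a pair of Intel E810 ports bound to the ice driver as its test NICs. The same identification can be reproduced directly from sysfs; a small sketch, not part of the test scripts:

for pci in 0000:86:00.0 0000:86:00.1; do
    vendor=$(cat /sys/bus/pci/devices/$pci/vendor)     # 0x8086 for these ports
    device=$(cat /sys/bus/pci/devices/$pci/device)     # 0x159b (E810 family)
    driver=$(basename "$(readlink /sys/bus/pci/devices/$pci/driver)")
    echo "$pci: $vendor $device driver=$driver"
done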
00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:17.512 Found net devices under 0000:86:00.0: cvl_0_0 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:17.512 Found net devices under 0000:86:00.1: cvl_0_1 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
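nvmf_tcp_init then splits the two ports across network namespaces: cvl_0_0 (the target side, 10.0.0.2) is moved into cvl_0_0_ns_spdk while cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic crosses a real link. The trace that follows boils down to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator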
00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:17.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:19:17.512 00:19:17.512 --- 10.0.0.2 ping statistics --- 00:19:17.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.512 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:19:17.512 00:19:17.512 --- 10.0.0.1 ping statistics --- 00:19:17.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.512 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:17.512 19:25:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1635165 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1635165 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@829 -- # '[' -z 1635165 ']' 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1635317 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ca70528c3702a0bdd769cb163415e3ab9cbec6c0e01717c9 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CBL 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ca70528c3702a0bdd769cb163415e3ab9cbec6c0e01717c9 0 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ca70528c3702a0bdd769cb163415e3ab9cbec6c0e01717c9 0 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:17.512 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ca70528c3702a0bdd769cb163415e3ab9cbec6c0e01717c9 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CBL 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CBL 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.CBL 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=caaf9e8781cb017dab066c1d9de9a4d00869bf6d3d4ff35585c807b9b8b1ec73 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1gv 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key caaf9e8781cb017dab066c1d9de9a4d00869bf6d3d4ff35585c807b9b8b1ec73 3 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 caaf9e8781cb017dab066c1d9de9a4d00869bf6d3d4ff35585c807b9b8b1ec73 3 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=caaf9e8781cb017dab066c1d9de9a4d00869bf6d3d4ff35585c807b9b8b1ec73 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:17.513 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1gv 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1gv 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.1gv 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a840343c8a0f87544ae42be3a6c01468 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.sLc 00:19:17.772 19:25:28 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a840343c8a0f87544ae42be3a6c01468 1 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a840343c8a0f87544ae42be3a6c01468 1 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a840343c8a0f87544ae42be3a6c01468 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.sLc 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.sLc 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.sLc 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1d9944059af02aedc195abe9dda2b2d38b5acc57e7ff8b42 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IAs 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1d9944059af02aedc195abe9dda2b2d38b5acc57e7ff8b42 2 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1d9944059af02aedc195abe9dda2b2d38b5acc57e7ff8b42 2 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1d9944059af02aedc195abe9dda2b2d38b5acc57e7ff8b42 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IAs 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IAs 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.IAs 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.772 19:25:28 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9d01d109a0ff8c5e8768e594aaa771387daf6ecd181a9bca 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VT7 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9d01d109a0ff8c5e8768e594aaa771387daf6ecd181a9bca 2 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9d01d109a0ff8c5e8768e594aaa771387daf6ecd181a9bca 2 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9d01d109a0ff8c5e8768e594aaa771387daf6ecd181a9bca 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VT7 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VT7 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.VT7 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a39fabc57ffa9907c26ff0f3d21e5fcd 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Lya 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a39fabc57ffa9907c26ff0f3d21e5fcd 1 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a39fabc57ffa9907c26ff0f3d21e5fcd 1 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a39fabc57ffa9907c26ff0f3d21e5fcd 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:17.772 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:18.031 
19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Lya 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Lya 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Lya 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4618e2a6c4f3c029590960420ef196ad3fcf69f10fc8cb5a89ee1ba4e7236d36 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FeL 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4618e2a6c4f3c029590960420ef196ad3fcf69f10fc8cb5a89ee1ba4e7236d36 3 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4618e2a6c4f3c029590960420ef196ad3fcf69f10fc8cb5a89ee1ba4e7236d36 3 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4618e2a6c4f3c029590960420ef196ad3fcf69f10fc8cb5a89ee1ba4e7236d36 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FeL 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FeL 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.FeL 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1635165 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1635165 ']' 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
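Each gen_dhchap_key call above produces one DH-HMAC-CHAP secret file: random key material from /dev/urandom, formatted as DHHC-1:<hash id>:<base64>: (00/01/02/03 for null/sha256/sha384/sha512, matching the secrets passed to nvme connect later), written to a /tmp/spdk.key-* file and chmod 0600. The python step itself is not shown in the trace; the sketch below assumes it appends a CRC-32 of the key before base64-encoding, which is consistent with the secret lengths seen later in this log:

key=$(xxd -p -c0 -l 24 /dev/urandom)        # 48 hex characters of key material
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")          # assumed CRC placement/endianness
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
chmod 0600 "$file"
# register the secret with both applications so either side can look it up by name
rpc.py keyring_file_add_key key0 "$file"                          # nvmf_tgt (default RPC socket)
rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 "$file"    # host-side spdk_tgt

Once key0 and ckey0 are registered on both sides, the nvmf_subsystem_add_host --dhchap-key key0 --dhchap-ctrlr-key ckey0 and bdev_nvme_attach_controller --dhchap-key key0 --dhchap-ctrlr-key ckey0 calls in the trace below perform the actual bidirectional authentication.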
00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.031 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.290 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.290 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:18.290 19:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1635317 /var/tmp/host.sock 00:19:18.290 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1635317 ']' 00:19:18.290 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:18.290 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.290 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:18.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:18.290 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.290 19:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CBL 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.CBL 00:19:18.290 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.CBL 00:19:18.548 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.1gv ]] 00:19:18.548 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gv 00:19:18.548 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.548 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.548 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.548 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gv 00:19:18.548 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gv 00:19:18.807 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:18.807 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.sLc 00:19:18.807 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.807 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.807 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.807 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.sLc 00:19:18.807 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.sLc 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.IAs ]] 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IAs 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IAs 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IAs 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.VT7 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.VT7 00:19:19.065 19:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.VT7 00:19:19.324 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Lya ]] 00:19:19.324 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lya 00:19:19.324 19:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.324 19:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.324 19:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.324 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lya 00:19:19.324 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Lya 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.FeL 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.FeL 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.FeL 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:19.583 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.842 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.101 00:19:20.101 19:25:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.101 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.101 19:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.360 { 00:19:20.360 "cntlid": 1, 00:19:20.360 "qid": 0, 00:19:20.360 "state": "enabled", 00:19:20.360 "thread": "nvmf_tgt_poll_group_000", 00:19:20.360 "listen_address": { 00:19:20.360 "trtype": "TCP", 00:19:20.360 "adrfam": "IPv4", 00:19:20.360 "traddr": "10.0.0.2", 00:19:20.360 "trsvcid": "4420" 00:19:20.360 }, 00:19:20.360 "peer_address": { 00:19:20.360 "trtype": "TCP", 00:19:20.360 "adrfam": "IPv4", 00:19:20.360 "traddr": "10.0.0.1", 00:19:20.360 "trsvcid": "46458" 00:19:20.360 }, 00:19:20.360 "auth": { 00:19:20.360 "state": "completed", 00:19:20.360 "digest": "sha256", 00:19:20.360 "dhgroup": "null" 00:19:20.360 } 00:19:20.360 } 00:19:20.360 ]' 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.360 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.618 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:19:21.186 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.187 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:21.187 19:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.187 19:25:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.187 19:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.187 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.187 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.187 19:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.445 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.445 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.704 { 00:19:21.704 "cntlid": 3, 00:19:21.704 "qid": 0, 00:19:21.704 
"state": "enabled", 00:19:21.704 "thread": "nvmf_tgt_poll_group_000", 00:19:21.704 "listen_address": { 00:19:21.704 "trtype": "TCP", 00:19:21.704 "adrfam": "IPv4", 00:19:21.704 "traddr": "10.0.0.2", 00:19:21.704 "trsvcid": "4420" 00:19:21.704 }, 00:19:21.704 "peer_address": { 00:19:21.704 "trtype": "TCP", 00:19:21.704 "adrfam": "IPv4", 00:19:21.704 "traddr": "10.0.0.1", 00:19:21.704 "trsvcid": "46478" 00:19:21.704 }, 00:19:21.704 "auth": { 00:19:21.704 "state": "completed", 00:19:21.704 "digest": "sha256", 00:19:21.704 "dhgroup": "null" 00:19:21.704 } 00:19:21.704 } 00:19:21.704 ]' 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.704 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.963 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:21.963 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.963 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.963 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.963 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.963 19:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:19:22.531 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.531 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:22.531 19:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.531 19:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.789 19:25:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.789 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.048 00:19:23.048 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.048 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.048 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.307 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.307 19:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.307 { 00:19:23.307 "cntlid": 5, 00:19:23.307 "qid": 0, 00:19:23.307 "state": "enabled", 00:19:23.307 "thread": "nvmf_tgt_poll_group_000", 00:19:23.307 "listen_address": { 00:19:23.307 "trtype": "TCP", 00:19:23.307 "adrfam": "IPv4", 00:19:23.307 "traddr": "10.0.0.2", 00:19:23.307 "trsvcid": "4420" 00:19:23.307 }, 00:19:23.307 "peer_address": { 00:19:23.307 "trtype": "TCP", 00:19:23.307 "adrfam": "IPv4", 00:19:23.307 "traddr": "10.0.0.1", 00:19:23.307 "trsvcid": "46506" 00:19:23.307 }, 00:19:23.307 "auth": { 00:19:23.307 "state": "completed", 00:19:23.307 "digest": "sha256", 00:19:23.307 "dhgroup": "null" 00:19:23.307 } 00:19:23.307 } 00:19:23.307 ]' 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.307 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.566 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:19:24.133 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.133 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:24.133 19:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.133 19:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.133 19:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.133 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.133 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.133 19:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.392 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.651 00:19:24.651 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.651 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.651 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.910 { 00:19:24.910 "cntlid": 7, 00:19:24.910 "qid": 0, 00:19:24.910 "state": "enabled", 00:19:24.910 "thread": "nvmf_tgt_poll_group_000", 00:19:24.910 "listen_address": { 00:19:24.910 "trtype": "TCP", 00:19:24.910 "adrfam": "IPv4", 00:19:24.910 "traddr": "10.0.0.2", 00:19:24.910 "trsvcid": "4420" 00:19:24.910 }, 00:19:24.910 "peer_address": { 00:19:24.910 "trtype": "TCP", 00:19:24.910 "adrfam": "IPv4", 00:19:24.910 "traddr": "10.0.0.1", 00:19:24.910 "trsvcid": "46532" 00:19:24.910 }, 00:19:24.910 "auth": { 00:19:24.910 "state": "completed", 00:19:24.910 "digest": "sha256", 00:19:24.910 "dhgroup": "null" 00:19:24.910 } 00:19:24.910 } 00:19:24.910 ]' 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.910 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.169 19:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:19:25.735 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.735 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:25.735 19:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.735 19:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.735 19:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.735 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.735 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.735 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:25.735 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.994 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.994 19:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.253 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.253 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.253 19:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:26.253 19:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.253 19:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.253 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.253 { 00:19:26.253 "cntlid": 9, 00:19:26.253 "qid": 0, 00:19:26.253 "state": "enabled", 00:19:26.253 "thread": "nvmf_tgt_poll_group_000", 00:19:26.253 "listen_address": { 00:19:26.253 "trtype": "TCP", 00:19:26.253 "adrfam": "IPv4", 00:19:26.253 "traddr": "10.0.0.2", 00:19:26.253 "trsvcid": "4420" 00:19:26.253 }, 00:19:26.253 "peer_address": { 00:19:26.253 "trtype": "TCP", 00:19:26.253 "adrfam": "IPv4", 00:19:26.253 "traddr": "10.0.0.1", 00:19:26.253 "trsvcid": "58320" 00:19:26.253 }, 00:19:26.253 "auth": { 00:19:26.253 "state": "completed", 00:19:26.253 "digest": "sha256", 00:19:26.253 "dhgroup": "ffdhe2048" 00:19:26.253 } 00:19:26.253 } 00:19:26.253 ]' 00:19:26.253 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.253 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.253 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.512 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.512 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.512 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.512 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.512 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.512 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:19:27.079 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.079 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:27.079 19:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.079 19:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.079 19:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.079 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.079 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.079 19:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.338 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.596 00:19:27.596 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.596 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.596 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.855 { 00:19:27.855 "cntlid": 11, 00:19:27.855 "qid": 0, 00:19:27.855 "state": "enabled", 00:19:27.855 "thread": "nvmf_tgt_poll_group_000", 00:19:27.855 "listen_address": { 00:19:27.855 "trtype": "TCP", 00:19:27.855 "adrfam": "IPv4", 00:19:27.855 "traddr": "10.0.0.2", 00:19:27.855 "trsvcid": "4420" 00:19:27.855 }, 00:19:27.855 "peer_address": { 00:19:27.855 "trtype": "TCP", 00:19:27.855 "adrfam": "IPv4", 00:19:27.855 "traddr": "10.0.0.1", 00:19:27.855 "trsvcid": "58354" 00:19:27.855 }, 00:19:27.855 "auth": { 00:19:27.855 "state": "completed", 00:19:27.855 "digest": "sha256", 00:19:27.855 "dhgroup": "ffdhe2048" 00:19:27.855 } 00:19:27.855 } 00:19:27.855 ]' 00:19:27.855 
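For reference, the per-attach verification that keeps repeating in this trace reduces to three jq probes against the target's qpair dump. A minimal stand-alone sketch, assuming the same rpc.py path and subsystem NQN shown above and a target answering on its default RPC socket (the script's rpc_cmd wrapper hides that socket path):

  # hedged sketch: query the authenticated qpair on the target side and assert
  # the digest/dhgroup/state negotiated for this iteration (sha256 / ffdhe2048 here)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
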
19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.855 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.113 19:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:19:28.680 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.680 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:28.680 19:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.680 19:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.680 19:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.680 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.680 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.680 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.941 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.941 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.239 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.239 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.239 19:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.239 19:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.239 19:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.239 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.239 { 00:19:29.239 "cntlid": 13, 00:19:29.239 "qid": 0, 00:19:29.239 "state": "enabled", 00:19:29.239 "thread": "nvmf_tgt_poll_group_000", 00:19:29.239 "listen_address": { 00:19:29.239 "trtype": "TCP", 00:19:29.239 "adrfam": "IPv4", 00:19:29.239 "traddr": "10.0.0.2", 00:19:29.239 "trsvcid": "4420" 00:19:29.239 }, 00:19:29.239 "peer_address": { 00:19:29.239 "trtype": "TCP", 00:19:29.239 "adrfam": "IPv4", 00:19:29.239 "traddr": "10.0.0.1", 00:19:29.239 "trsvcid": "58382" 00:19:29.239 }, 00:19:29.239 "auth": { 00:19:29.239 "state": "completed", 00:19:29.239 "digest": "sha256", 00:19:29.239 "dhgroup": "ffdhe2048" 00:19:29.239 } 00:19:29.239 } 00:19:29.239 ]' 00:19:29.239 19:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.239 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.239 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.239 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.239 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.498 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.498 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.498 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.498 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:19:30.064 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.064 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.064 19:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.065 19:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.065 19:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.065 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.065 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.065 19:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.323 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.582 00:19:30.582 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.582 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.582 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.840 { 00:19:30.840 "cntlid": 15, 00:19:30.840 "qid": 0, 00:19:30.840 "state": "enabled", 00:19:30.840 "thread": "nvmf_tgt_poll_group_000", 00:19:30.840 "listen_address": { 00:19:30.840 "trtype": "TCP", 00:19:30.840 "adrfam": "IPv4", 00:19:30.840 "traddr": "10.0.0.2", 00:19:30.840 "trsvcid": "4420" 00:19:30.840 }, 00:19:30.840 "peer_address": { 00:19:30.840 "trtype": "TCP", 00:19:30.840 "adrfam": "IPv4", 00:19:30.840 "traddr": "10.0.0.1", 00:19:30.840 "trsvcid": "58406" 00:19:30.840 }, 00:19:30.840 "auth": { 00:19:30.840 "state": "completed", 00:19:30.840 "digest": "sha256", 00:19:30.840 "dhgroup": "ffdhe2048" 00:19:30.840 } 00:19:30.840 } 00:19:30.840 ]' 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.840 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.098 19:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:19:31.665 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.665 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:31.665 19:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.665 19:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.665 19:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.665 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.665 19:25:42 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.665 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.665 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.923 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.923 00:19:32.181 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.181 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.181 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.181 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.181 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.181 19:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.181 19:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.181 19:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.181 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.181 { 00:19:32.181 "cntlid": 17, 00:19:32.181 "qid": 0, 00:19:32.181 "state": "enabled", 00:19:32.181 "thread": "nvmf_tgt_poll_group_000", 00:19:32.181 "listen_address": { 00:19:32.181 "trtype": "TCP", 00:19:32.181 "adrfam": "IPv4", 00:19:32.181 "traddr": 
"10.0.0.2", 00:19:32.181 "trsvcid": "4420" 00:19:32.181 }, 00:19:32.181 "peer_address": { 00:19:32.181 "trtype": "TCP", 00:19:32.181 "adrfam": "IPv4", 00:19:32.181 "traddr": "10.0.0.1", 00:19:32.181 "trsvcid": "58434" 00:19:32.181 }, 00:19:32.181 "auth": { 00:19:32.181 "state": "completed", 00:19:32.181 "digest": "sha256", 00:19:32.181 "dhgroup": "ffdhe3072" 00:19:32.181 } 00:19:32.181 } 00:19:32.181 ]' 00:19:32.181 19:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.181 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.181 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.439 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:32.439 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.439 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.439 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.439 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.439 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:19:33.005 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.005 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.005 19:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.005 19:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.005 19:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.005 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.005 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.005 19:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.264 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.522 00:19:33.522 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.522 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.522 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.780 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.781 { 00:19:33.781 "cntlid": 19, 00:19:33.781 "qid": 0, 00:19:33.781 "state": "enabled", 00:19:33.781 "thread": "nvmf_tgt_poll_group_000", 00:19:33.781 "listen_address": { 00:19:33.781 "trtype": "TCP", 00:19:33.781 "adrfam": "IPv4", 00:19:33.781 "traddr": "10.0.0.2", 00:19:33.781 "trsvcid": "4420" 00:19:33.781 }, 00:19:33.781 "peer_address": { 00:19:33.781 "trtype": "TCP", 00:19:33.781 "adrfam": "IPv4", 00:19:33.781 "traddr": "10.0.0.1", 00:19:33.781 "trsvcid": "58472" 00:19:33.781 }, 00:19:33.781 "auth": { 00:19:33.781 "state": "completed", 00:19:33.781 "digest": "sha256", 00:19:33.781 "dhgroup": "ffdhe3072" 00:19:33.781 } 00:19:33.781 } 00:19:33.781 ]' 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.781 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.039 19:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:19:34.606 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.606 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.606 19:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.606 19:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.606 19:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.606 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.606 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.606 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.864 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.122 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.122 { 00:19:35.122 "cntlid": 21, 00:19:35.122 "qid": 0, 00:19:35.122 "state": "enabled", 00:19:35.122 "thread": "nvmf_tgt_poll_group_000", 00:19:35.122 "listen_address": { 00:19:35.122 "trtype": "TCP", 00:19:35.122 "adrfam": "IPv4", 00:19:35.122 "traddr": "10.0.0.2", 00:19:35.122 "trsvcid": "4420" 00:19:35.122 }, 00:19:35.122 "peer_address": { 00:19:35.122 "trtype": "TCP", 00:19:35.122 "adrfam": "IPv4", 00:19:35.122 "traddr": "10.0.0.1", 00:19:35.122 "trsvcid": "58494" 00:19:35.122 }, 00:19:35.122 "auth": { 00:19:35.122 "state": "completed", 00:19:35.122 "digest": "sha256", 00:19:35.122 "dhgroup": "ffdhe3072" 00:19:35.122 } 00:19:35.122 } 00:19:35.122 ]' 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.122 19:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.380 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:35.380 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.380 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.380 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.380 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.380 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:19:35.946 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
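The disconnect above closes one more full pass. Stripped of the xtrace noise, every pass in this log follows the same per-key sequence; a hedged outline of the pass just completed (sha256 / ffdhe3072 / key2), reusing the exact paths, NQNs and DHHC-1 secrets already printed in this trace and assuming a running SPDK target on its default RPC socket plus the host application behind /var/tmp/host.sock:

  # hedged outline of one iteration from this log, not a substitute for target/auth.sh
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  key='DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==:'
  ckey='DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt:'

  # host side: restrict the digests/dhgroups offered for this pass
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # target side: authorize the host with the key pair under test
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: attach, verify (as in the jq sketch above), then detach
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # kernel initiator pass with the raw DHHC-1 secrets, then clean up
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
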
00:19:35.947 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.947 19:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.947 19:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.205 19:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.464 00:19:36.464 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.464 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.464 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.723 { 00:19:36.723 "cntlid": 23, 00:19:36.723 "qid": 0, 00:19:36.723 "state": "enabled", 00:19:36.723 "thread": "nvmf_tgt_poll_group_000", 00:19:36.723 "listen_address": { 00:19:36.723 "trtype": "TCP", 00:19:36.723 "adrfam": "IPv4", 00:19:36.723 "traddr": "10.0.0.2", 00:19:36.723 "trsvcid": "4420" 00:19:36.723 }, 00:19:36.723 "peer_address": { 00:19:36.723 "trtype": "TCP", 00:19:36.723 "adrfam": "IPv4", 00:19:36.723 "traddr": "10.0.0.1", 00:19:36.723 "trsvcid": "33716" 00:19:36.723 }, 00:19:36.723 "auth": { 00:19:36.723 "state": "completed", 00:19:36.723 "digest": "sha256", 00:19:36.723 "dhgroup": "ffdhe3072" 00:19:36.723 } 00:19:36.723 } 00:19:36.723 ]' 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.723 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.981 19:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:19:37.548 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.548 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:37.548 19:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.548 19:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.548 19:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.548 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.548 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.548 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.548 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.806 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.064 00:19:38.064 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.064 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.064 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.322 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.322 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.322 19:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.322 19:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.322 19:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.322 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.322 { 00:19:38.322 "cntlid": 25, 00:19:38.322 "qid": 0, 00:19:38.322 "state": "enabled", 00:19:38.322 "thread": "nvmf_tgt_poll_group_000", 00:19:38.322 "listen_address": { 00:19:38.322 "trtype": "TCP", 00:19:38.322 "adrfam": "IPv4", 00:19:38.322 "traddr": "10.0.0.2", 00:19:38.322 "trsvcid": "4420" 00:19:38.322 }, 00:19:38.322 "peer_address": { 00:19:38.322 "trtype": "TCP", 00:19:38.322 "adrfam": "IPv4", 00:19:38.322 "traddr": "10.0.0.1", 00:19:38.322 "trsvcid": "33758" 00:19:38.322 }, 00:19:38.322 "auth": { 00:19:38.322 "state": "completed", 00:19:38.322 "digest": "sha256", 00:19:38.322 "dhgroup": "ffdhe4096" 00:19:38.322 } 00:19:38.322 } 00:19:38.322 ]' 00:19:38.322 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.322 19:25:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.322 19:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.322 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.322 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.322 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.322 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.322 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.581 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.148 19:25:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.148 19:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.406 00:19:39.406 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.406 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.406 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.665 { 00:19:39.665 "cntlid": 27, 00:19:39.665 "qid": 0, 00:19:39.665 "state": "enabled", 00:19:39.665 "thread": "nvmf_tgt_poll_group_000", 00:19:39.665 "listen_address": { 00:19:39.665 "trtype": "TCP", 00:19:39.665 "adrfam": "IPv4", 00:19:39.665 "traddr": "10.0.0.2", 00:19:39.665 "trsvcid": "4420" 00:19:39.665 }, 00:19:39.665 "peer_address": { 00:19:39.665 "trtype": "TCP", 00:19:39.665 "adrfam": "IPv4", 00:19:39.665 "traddr": "10.0.0.1", 00:19:39.665 "trsvcid": "33780" 00:19:39.665 }, 00:19:39.665 "auth": { 00:19:39.665 "state": "completed", 00:19:39.665 "digest": "sha256", 00:19:39.665 "dhgroup": "ffdhe4096" 00:19:39.665 } 00:19:39.665 } 00:19:39.665 ]' 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:39.665 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.923 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.923 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.923 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.923 19:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:19:40.490 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.490 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:40.490 19:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.490 19:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.490 19:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.490 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.490 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.490 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.749 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.008 00:19:41.008 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.008 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.009 19:25:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.268 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.268 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.268 19:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.268 19:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.268 19:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.268 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.268 { 00:19:41.268 "cntlid": 29, 00:19:41.268 "qid": 0, 00:19:41.268 "state": "enabled", 00:19:41.268 "thread": "nvmf_tgt_poll_group_000", 00:19:41.268 "listen_address": { 00:19:41.268 "trtype": "TCP", 00:19:41.268 "adrfam": "IPv4", 00:19:41.268 "traddr": "10.0.0.2", 00:19:41.268 "trsvcid": "4420" 00:19:41.268 }, 00:19:41.268 "peer_address": { 00:19:41.268 "trtype": "TCP", 00:19:41.268 "adrfam": "IPv4", 00:19:41.268 "traddr": "10.0.0.1", 00:19:41.268 "trsvcid": "33808" 00:19:41.268 }, 00:19:41.268 "auth": { 00:19:41.268 "state": "completed", 00:19:41.268 "digest": "sha256", 00:19:41.268 "dhgroup": "ffdhe4096" 00:19:41.268 } 00:19:41.268 } 00:19:41.268 ]' 00:19:41.268 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.268 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.268 19:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.268 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:41.268 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.268 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.268 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.268 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.525 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:19:42.091 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.091 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.091 19:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.091 19:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.091 19:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.091 19:25:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.091 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.091 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.350 19:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.608 00:19:42.608 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.608 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.608 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.608 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.608 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.608 19:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.608 19:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.608 19:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.608 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.608 { 00:19:42.608 "cntlid": 31, 00:19:42.608 "qid": 0, 00:19:42.608 "state": "enabled", 00:19:42.608 "thread": "nvmf_tgt_poll_group_000", 00:19:42.608 "listen_address": { 00:19:42.608 "trtype": "TCP", 00:19:42.608 "adrfam": "IPv4", 00:19:42.608 "traddr": "10.0.0.2", 00:19:42.608 "trsvcid": "4420" 00:19:42.608 }, 
00:19:42.608 "peer_address": { 00:19:42.608 "trtype": "TCP", 00:19:42.608 "adrfam": "IPv4", 00:19:42.608 "traddr": "10.0.0.1", 00:19:42.608 "trsvcid": "33838" 00:19:42.608 }, 00:19:42.608 "auth": { 00:19:42.608 "state": "completed", 00:19:42.608 "digest": "sha256", 00:19:42.608 "dhgroup": "ffdhe4096" 00:19:42.608 } 00:19:42.608 } 00:19:42.608 ]' 00:19:42.608 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.866 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.866 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.866 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.866 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.866 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.866 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.866 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.125 19:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.690 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:43.691 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:43.691 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:43.691 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.691 19:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.691 19:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.691 19:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.691 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.691 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.256 00:19:44.256 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.256 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.256 19:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.256 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.256 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.256 19:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.256 19:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.256 19:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.256 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.256 { 00:19:44.256 "cntlid": 33, 00:19:44.256 "qid": 0, 00:19:44.256 "state": "enabled", 00:19:44.256 "thread": "nvmf_tgt_poll_group_000", 00:19:44.256 "listen_address": { 00:19:44.256 "trtype": "TCP", 00:19:44.256 "adrfam": "IPv4", 00:19:44.256 "traddr": "10.0.0.2", 00:19:44.256 "trsvcid": "4420" 00:19:44.256 }, 00:19:44.256 "peer_address": { 00:19:44.256 "trtype": "TCP", 00:19:44.256 "adrfam": "IPv4", 00:19:44.256 "traddr": "10.0.0.1", 00:19:44.256 "trsvcid": "33848" 00:19:44.256 }, 00:19:44.256 "auth": { 00:19:44.256 "state": "completed", 00:19:44.256 "digest": "sha256", 00:19:44.256 "dhgroup": "ffdhe6144" 00:19:44.256 } 00:19:44.256 } 00:19:44.256 ]' 00:19:44.256 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.256 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.256 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.514 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.514 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.514 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.514 19:25:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.514 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.514 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:19:45.079 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.079 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:45.079 19:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.079 19:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.079 19:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.079 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.079 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:45.079 19:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.336 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.593 00:19:45.593 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.593 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.593 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.886 { 00:19:45.886 "cntlid": 35, 00:19:45.886 "qid": 0, 00:19:45.886 "state": "enabled", 00:19:45.886 "thread": "nvmf_tgt_poll_group_000", 00:19:45.886 "listen_address": { 00:19:45.886 "trtype": "TCP", 00:19:45.886 "adrfam": "IPv4", 00:19:45.886 "traddr": "10.0.0.2", 00:19:45.886 "trsvcid": "4420" 00:19:45.886 }, 00:19:45.886 "peer_address": { 00:19:45.886 "trtype": "TCP", 00:19:45.886 "adrfam": "IPv4", 00:19:45.886 "traddr": "10.0.0.1", 00:19:45.886 "trsvcid": "33866" 00:19:45.886 }, 00:19:45.886 "auth": { 00:19:45.886 "state": "completed", 00:19:45.886 "digest": "sha256", 00:19:45.886 "dhgroup": "ffdhe6144" 00:19:45.886 } 00:19:45.886 } 00:19:45.886 ]' 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.886 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.144 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.144 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.144 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.144 19:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:19:46.709 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.709 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:46.709 19:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.709 19:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.709 19:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.709 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.709 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.709 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.966 19:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.224 00:19:47.224 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.224 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.224 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.483 { 00:19:47.483 "cntlid": 37, 00:19:47.483 "qid": 0, 00:19:47.483 "state": "enabled", 00:19:47.483 "thread": "nvmf_tgt_poll_group_000", 00:19:47.483 "listen_address": { 00:19:47.483 "trtype": "TCP", 00:19:47.483 "adrfam": "IPv4", 00:19:47.483 "traddr": "10.0.0.2", 00:19:47.483 "trsvcid": "4420" 00:19:47.483 }, 00:19:47.483 "peer_address": { 00:19:47.483 "trtype": "TCP", 00:19:47.483 "adrfam": "IPv4", 00:19:47.483 "traddr": "10.0.0.1", 00:19:47.483 "trsvcid": "55762" 00:19:47.483 }, 00:19:47.483 "auth": { 00:19:47.483 "state": "completed", 00:19:47.483 "digest": "sha256", 00:19:47.483 "dhgroup": "ffdhe6144" 00:19:47.483 } 00:19:47.483 } 00:19:47.483 ]' 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.483 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.740 19:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:19:48.306 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.307 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:48.307 19:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.307 19:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.307 19:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.307 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.307 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.307 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.565 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.824 00:19:48.824 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.824 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.824 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.083 { 00:19:49.083 "cntlid": 39, 00:19:49.083 "qid": 0, 00:19:49.083 "state": "enabled", 00:19:49.083 "thread": "nvmf_tgt_poll_group_000", 00:19:49.083 "listen_address": { 00:19:49.083 "trtype": "TCP", 00:19:49.083 "adrfam": "IPv4", 00:19:49.083 "traddr": "10.0.0.2", 00:19:49.083 "trsvcid": "4420" 00:19:49.083 }, 00:19:49.083 "peer_address": { 00:19:49.083 "trtype": "TCP", 00:19:49.083 "adrfam": "IPv4", 00:19:49.083 "traddr": "10.0.0.1", 00:19:49.083 "trsvcid": "55788" 00:19:49.083 }, 00:19:49.083 "auth": { 00:19:49.083 "state": "completed", 00:19:49.083 "digest": "sha256", 00:19:49.083 "dhgroup": "ffdhe6144" 00:19:49.083 } 00:19:49.083 } 00:19:49.083 ]' 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.083 19:25:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.083 19:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.342 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:19:49.907 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.907 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:49.907 19:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.907 19:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.907 19:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.907 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.907 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.907 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.907 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.164 19:26:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.164 19:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.730 00:19:50.730 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.730 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.730 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.730 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.730 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.730 19:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.730 19:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.987 19:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.987 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.987 { 00:19:50.987 "cntlid": 41, 00:19:50.987 "qid": 0, 00:19:50.987 "state": "enabled", 00:19:50.988 "thread": "nvmf_tgt_poll_group_000", 00:19:50.988 "listen_address": { 00:19:50.988 "trtype": "TCP", 00:19:50.988 "adrfam": "IPv4", 00:19:50.988 "traddr": "10.0.0.2", 00:19:50.988 "trsvcid": "4420" 00:19:50.988 }, 00:19:50.988 "peer_address": { 00:19:50.988 "trtype": "TCP", 00:19:50.988 "adrfam": "IPv4", 00:19:50.988 "traddr": "10.0.0.1", 00:19:50.988 "trsvcid": "55812" 00:19:50.988 }, 00:19:50.988 "auth": { 00:19:50.988 "state": "completed", 00:19:50.988 "digest": "sha256", 00:19:50.988 "dhgroup": "ffdhe8192" 00:19:50.988 } 00:19:50.988 } 00:19:50.988 ]' 00:19:50.988 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.988 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.988 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.988 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.988 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.988 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.988 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.988 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.245 19:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.812 19:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.378 00:19:52.378 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.378 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.378 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.649 { 00:19:52.649 "cntlid": 43, 00:19:52.649 "qid": 0, 00:19:52.649 "state": "enabled", 00:19:52.649 "thread": "nvmf_tgt_poll_group_000", 00:19:52.649 "listen_address": { 00:19:52.649 "trtype": "TCP", 00:19:52.649 "adrfam": "IPv4", 00:19:52.649 "traddr": "10.0.0.2", 00:19:52.649 "trsvcid": "4420" 00:19:52.649 }, 00:19:52.649 "peer_address": { 00:19:52.649 "trtype": "TCP", 00:19:52.649 "adrfam": "IPv4", 00:19:52.649 "traddr": "10.0.0.1", 00:19:52.649 "trsvcid": "55854" 00:19:52.649 }, 00:19:52.649 "auth": { 00:19:52.649 "state": "completed", 00:19:52.649 "digest": "sha256", 00:19:52.649 "dhgroup": "ffdhe8192" 00:19:52.649 } 00:19:52.649 } 00:19:52.649 ]' 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.649 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.908 19:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.474 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.040 00:19:54.040 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.040 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.040 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.299 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.299 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.299 19:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.299 19:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.299 19:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.299 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.299 { 00:19:54.299 "cntlid": 45, 00:19:54.299 "qid": 0, 00:19:54.299 "state": "enabled", 00:19:54.299 "thread": "nvmf_tgt_poll_group_000", 00:19:54.299 "listen_address": { 00:19:54.299 "trtype": "TCP", 00:19:54.299 "adrfam": "IPv4", 00:19:54.299 "traddr": "10.0.0.2", 00:19:54.299 "trsvcid": "4420" 
00:19:54.299 }, 00:19:54.299 "peer_address": { 00:19:54.299 "trtype": "TCP", 00:19:54.299 "adrfam": "IPv4", 00:19:54.299 "traddr": "10.0.0.1", 00:19:54.299 "trsvcid": "55880" 00:19:54.299 }, 00:19:54.299 "auth": { 00:19:54.299 "state": "completed", 00:19:54.299 "digest": "sha256", 00:19:54.299 "dhgroup": "ffdhe8192" 00:19:54.299 } 00:19:54.299 } 00:19:54.299 ]' 00:19:54.299 19:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.299 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.299 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.299 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.299 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.299 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.299 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.299 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.556 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:19:55.123 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.123 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:55.123 19:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.123 19:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.123 19:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.123 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.123 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.123 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.381 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:55.381 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.381 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.381 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:55.381 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.381 19:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.381 19:26:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:55.381 19:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.381 19:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.381 19:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.381 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.381 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.639 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.898 { 00:19:55.898 "cntlid": 47, 00:19:55.898 "qid": 0, 00:19:55.898 "state": "enabled", 00:19:55.898 "thread": "nvmf_tgt_poll_group_000", 00:19:55.898 "listen_address": { 00:19:55.898 "trtype": "TCP", 00:19:55.898 "adrfam": "IPv4", 00:19:55.898 "traddr": "10.0.0.2", 00:19:55.898 "trsvcid": "4420" 00:19:55.898 }, 00:19:55.898 "peer_address": { 00:19:55.898 "trtype": "TCP", 00:19:55.898 "adrfam": "IPv4", 00:19:55.898 "traddr": "10.0.0.1", 00:19:55.898 "trsvcid": "55898" 00:19:55.898 }, 00:19:55.898 "auth": { 00:19:55.898 "state": "completed", 00:19:55.898 "digest": "sha256", 00:19:55.898 "dhgroup": "ffdhe8192" 00:19:55.898 } 00:19:55.898 } 00:19:55.898 ]' 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.898 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.156 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.156 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.156 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.156 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.156 
19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.156 19:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:19:56.722 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.722 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:56.722 19:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.722 19:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.980 19:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.980 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:56.980 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.980 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.980 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:56.980 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:56.980 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:56.980 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.980 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.980 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:56.981 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.981 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.981 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.981 19:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.981 19:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.981 19:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.981 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.981 19:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.239 00:19:57.239 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.239 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.239 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.498 { 00:19:57.498 "cntlid": 49, 00:19:57.498 "qid": 0, 00:19:57.498 "state": "enabled", 00:19:57.498 "thread": "nvmf_tgt_poll_group_000", 00:19:57.498 "listen_address": { 00:19:57.498 "trtype": "TCP", 00:19:57.498 "adrfam": "IPv4", 00:19:57.498 "traddr": "10.0.0.2", 00:19:57.498 "trsvcid": "4420" 00:19:57.498 }, 00:19:57.498 "peer_address": { 00:19:57.498 "trtype": "TCP", 00:19:57.498 "adrfam": "IPv4", 00:19:57.498 "traddr": "10.0.0.1", 00:19:57.498 "trsvcid": "35298" 00:19:57.498 }, 00:19:57.498 "auth": { 00:19:57.498 "state": "completed", 00:19:57.498 "digest": "sha384", 00:19:57.498 "dhgroup": "null" 00:19:57.498 } 00:19:57.498 } 00:19:57.498 ]' 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.498 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.757 19:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:19:58.324 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.324 19:26:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:58.324 19:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.324 19:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.324 19:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.324 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.324 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.324 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.583 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.840 00:19:58.840 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.840 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.840 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.840 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.840 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.840 19:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.840 19:26:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.840 19:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.840 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.840 { 00:19:58.840 "cntlid": 51, 00:19:58.840 "qid": 0, 00:19:58.841 "state": "enabled", 00:19:58.841 "thread": "nvmf_tgt_poll_group_000", 00:19:58.841 "listen_address": { 00:19:58.841 "trtype": "TCP", 00:19:58.841 "adrfam": "IPv4", 00:19:58.841 "traddr": "10.0.0.2", 00:19:58.841 "trsvcid": "4420" 00:19:58.841 }, 00:19:58.841 "peer_address": { 00:19:58.841 "trtype": "TCP", 00:19:58.841 "adrfam": "IPv4", 00:19:58.841 "traddr": "10.0.0.1", 00:19:58.841 "trsvcid": "35322" 00:19:58.841 }, 00:19:58.841 "auth": { 00:19:58.841 "state": "completed", 00:19:58.841 "digest": "sha384", 00:19:58.841 "dhgroup": "null" 00:19:58.841 } 00:19:58.841 } 00:19:58.841 ]' 00:19:58.841 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.098 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.098 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.098 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:59.098 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.098 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.098 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.098 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.357 19:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:19:59.923 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.923 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:59.923 19:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.923 19:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.923 19:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.923 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.923 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:59.923 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:59.923 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:59.923 19:26:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.923 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.924 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:59.924 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.924 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.924 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.924 19:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.924 19:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.924 19:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.924 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.924 19:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.181 00:20:00.181 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.181 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.181 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.438 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.438 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.438 19:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.438 19:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.438 19:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.438 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.438 { 00:20:00.438 "cntlid": 53, 00:20:00.438 "qid": 0, 00:20:00.438 "state": "enabled", 00:20:00.438 "thread": "nvmf_tgt_poll_group_000", 00:20:00.438 "listen_address": { 00:20:00.438 "trtype": "TCP", 00:20:00.438 "adrfam": "IPv4", 00:20:00.438 "traddr": "10.0.0.2", 00:20:00.438 "trsvcid": "4420" 00:20:00.438 }, 00:20:00.438 "peer_address": { 00:20:00.438 "trtype": "TCP", 00:20:00.438 "adrfam": "IPv4", 00:20:00.438 "traddr": "10.0.0.1", 00:20:00.438 "trsvcid": "35358" 00:20:00.438 }, 00:20:00.438 "auth": { 00:20:00.438 "state": "completed", 00:20:00.438 "digest": "sha384", 00:20:00.438 "dhgroup": "null" 00:20:00.438 } 00:20:00.438 } 00:20:00.438 ]' 00:20:00.438 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.438 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:20:00.438 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.438 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:00.439 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.695 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.695 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.695 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.696 19:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:20:01.259 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.259 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.259 19:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.259 19:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.259 19:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.259 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.259 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:01.259 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.571 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.829 00:20:01.829 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.829 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.829 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.829 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.829 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.829 19:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.829 19:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.829 19:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.086 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.086 { 00:20:02.086 "cntlid": 55, 00:20:02.086 "qid": 0, 00:20:02.086 "state": "enabled", 00:20:02.086 "thread": "nvmf_tgt_poll_group_000", 00:20:02.086 "listen_address": { 00:20:02.086 "trtype": "TCP", 00:20:02.086 "adrfam": "IPv4", 00:20:02.086 "traddr": "10.0.0.2", 00:20:02.086 "trsvcid": "4420" 00:20:02.086 }, 00:20:02.086 "peer_address": { 00:20:02.086 "trtype": "TCP", 00:20:02.086 "adrfam": "IPv4", 00:20:02.086 "traddr": "10.0.0.1", 00:20:02.086 "trsvcid": "35380" 00:20:02.086 }, 00:20:02.086 "auth": { 00:20:02.086 "state": "completed", 00:20:02.086 "digest": "sha384", 00:20:02.086 "dhgroup": "null" 00:20:02.086 } 00:20:02.086 } 00:20:02.086 ]' 00:20:02.086 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.086 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.086 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.086 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:02.086 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.086 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.086 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.086 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.343 19:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:20:02.939 19:26:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.939 19:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.197 00:20:03.197 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.197 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.197 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.464 { 00:20:03.464 "cntlid": 57, 00:20:03.464 "qid": 0, 00:20:03.464 "state": "enabled", 00:20:03.464 "thread": "nvmf_tgt_poll_group_000", 00:20:03.464 "listen_address": { 00:20:03.464 "trtype": "TCP", 00:20:03.464 "adrfam": "IPv4", 00:20:03.464 "traddr": "10.0.0.2", 00:20:03.464 "trsvcid": "4420" 00:20:03.464 }, 00:20:03.464 "peer_address": { 00:20:03.464 "trtype": "TCP", 00:20:03.464 "adrfam": "IPv4", 00:20:03.464 "traddr": "10.0.0.1", 00:20:03.464 "trsvcid": "35402" 00:20:03.464 }, 00:20:03.464 "auth": { 00:20:03.464 "state": "completed", 00:20:03.464 "digest": "sha384", 00:20:03.464 "dhgroup": "ffdhe2048" 00:20:03.464 } 00:20:03.464 } 00:20:03.464 ]' 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.464 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.722 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.722 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.722 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.722 19:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:20:04.289 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.289 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:04.289 19:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.289 19:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.289 19:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.289 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.289 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.289 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.547 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.806 00:20:04.806 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.806 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.806 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.066 { 00:20:05.066 "cntlid": 59, 00:20:05.066 "qid": 0, 00:20:05.066 "state": "enabled", 00:20:05.066 "thread": "nvmf_tgt_poll_group_000", 00:20:05.066 "listen_address": { 00:20:05.066 "trtype": "TCP", 00:20:05.066 "adrfam": "IPv4", 00:20:05.066 "traddr": "10.0.0.2", 00:20:05.066 "trsvcid": "4420" 00:20:05.066 }, 00:20:05.066 "peer_address": { 00:20:05.066 "trtype": "TCP", 00:20:05.066 "adrfam": "IPv4", 00:20:05.066 
"traddr": "10.0.0.1", 00:20:05.066 "trsvcid": "35432" 00:20:05.066 }, 00:20:05.066 "auth": { 00:20:05.066 "state": "completed", 00:20:05.066 "digest": "sha384", 00:20:05.066 "dhgroup": "ffdhe2048" 00:20:05.066 } 00:20:05.066 } 00:20:05.066 ]' 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.066 19:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.325 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:20:05.891 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.891 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:05.891 19:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.891 19:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.891 19:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.891 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.891 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.891 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.150 19:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.409 00:20:06.409 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.409 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.409 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.409 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.409 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.409 19:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.409 19:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.409 19:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.409 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.409 { 00:20:06.409 "cntlid": 61, 00:20:06.409 "qid": 0, 00:20:06.409 "state": "enabled", 00:20:06.409 "thread": "nvmf_tgt_poll_group_000", 00:20:06.409 "listen_address": { 00:20:06.409 "trtype": "TCP", 00:20:06.409 "adrfam": "IPv4", 00:20:06.409 "traddr": "10.0.0.2", 00:20:06.409 "trsvcid": "4420" 00:20:06.409 }, 00:20:06.409 "peer_address": { 00:20:06.409 "trtype": "TCP", 00:20:06.409 "adrfam": "IPv4", 00:20:06.409 "traddr": "10.0.0.1", 00:20:06.409 "trsvcid": "33212" 00:20:06.409 }, 00:20:06.409 "auth": { 00:20:06.409 "state": "completed", 00:20:06.409 "digest": "sha384", 00:20:06.409 "dhgroup": "ffdhe2048" 00:20:06.409 } 00:20:06.409 } 00:20:06.409 ]' 00:20:06.409 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.666 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.667 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.667 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:06.667 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.667 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.667 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.667 19:26:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.925 19:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.493 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.752 00:20:07.752 19:26:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.752 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.752 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.010 { 00:20:08.010 "cntlid": 63, 00:20:08.010 "qid": 0, 00:20:08.010 "state": "enabled", 00:20:08.010 "thread": "nvmf_tgt_poll_group_000", 00:20:08.010 "listen_address": { 00:20:08.010 "trtype": "TCP", 00:20:08.010 "adrfam": "IPv4", 00:20:08.010 "traddr": "10.0.0.2", 00:20:08.010 "trsvcid": "4420" 00:20:08.010 }, 00:20:08.010 "peer_address": { 00:20:08.010 "trtype": "TCP", 00:20:08.010 "adrfam": "IPv4", 00:20:08.010 "traddr": "10.0.0.1", 00:20:08.010 "trsvcid": "33238" 00:20:08.010 }, 00:20:08.010 "auth": { 00:20:08.010 "state": "completed", 00:20:08.010 "digest": "sha384", 00:20:08.010 "dhgroup": "ffdhe2048" 00:20:08.010 } 00:20:08.010 } 00:20:08.010 ]' 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.010 19:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.268 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:20:08.835 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.835 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:08.835 19:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.835 19:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
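The iterations recorded above all exercise the same connect_authenticate flow for one digest/DH-group/key combination. As a reading aid, the following is a condensed bash sketch of a single iteration, reconstructed from the commands visible in this trace; the socket paths, variable names, and placeholder DHHC-1 secrets are illustrative assumptions, not the actual target/auth.sh source.

```bash
#!/usr/bin/env bash
# Condensed sketch of one connect_authenticate iteration from the trace above.
# Assumptions (not taken verbatim from target/auth.sh): target RPC on the default
# SPDK socket, host bdev RPC on /var/tmp/host.sock, keys key0..key3 / ckey0..ckey3
# already registered on the target, and placeholder DHHC-1 secrets for the nvme-cli step.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
hostid=80aaeb9f-0274-ea11-906e-0017a4403562
subnqn=nqn.2024-03.io.spdk:cnode0
digest=sha384 dhgroup=ffdhe3072 keyid=1

# Limit the host-side initiator to the digest/DH-group pair under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Register the host on the subsystem with the matching DH-HMAC-CHAP key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a controller through the host RPC, then verify the negotiated auth state.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator (secrets are placeholders here),
# then tear the host registration down before the next key/DH-group combination.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "DHHC-1:01:placeholder:" --dhchap-ctrl-secret "DHHC-1:02:placeholder:"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```

The jq checks mirror the `[[ sha384 == ... ]]`, `[[ ffdhe2048 == ... ]]`, and `[[ completed == ... ]]` assertions that recur throughout the log: after each attach, the test confirms from nvmf_subsystem_get_qpairs output that the qpair actually completed DH-HMAC-CHAP authentication with the expected digest and DH group before detaching and moving on.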
00:20:08.835 19:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.835 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.835 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.835 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.835 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.094 19:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.353 00:20:09.353 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.353 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.353 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.353 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.353 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.353 19:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.353 19:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.353 19:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.353 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.353 { 
00:20:09.353 "cntlid": 65, 00:20:09.353 "qid": 0, 00:20:09.353 "state": "enabled", 00:20:09.353 "thread": "nvmf_tgt_poll_group_000", 00:20:09.353 "listen_address": { 00:20:09.353 "trtype": "TCP", 00:20:09.353 "adrfam": "IPv4", 00:20:09.353 "traddr": "10.0.0.2", 00:20:09.353 "trsvcid": "4420" 00:20:09.353 }, 00:20:09.353 "peer_address": { 00:20:09.353 "trtype": "TCP", 00:20:09.353 "adrfam": "IPv4", 00:20:09.353 "traddr": "10.0.0.1", 00:20:09.353 "trsvcid": "33264" 00:20:09.353 }, 00:20:09.353 "auth": { 00:20:09.353 "state": "completed", 00:20:09.353 "digest": "sha384", 00:20:09.353 "dhgroup": "ffdhe3072" 00:20:09.353 } 00:20:09.353 } 00:20:09.353 ]' 00:20:09.353 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.612 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.612 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.612 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.612 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.612 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.612 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.612 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.871 19:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.441 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.700 00:20:10.700 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.700 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.700 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.959 { 00:20:10.959 "cntlid": 67, 00:20:10.959 "qid": 0, 00:20:10.959 "state": "enabled", 00:20:10.959 "thread": "nvmf_tgt_poll_group_000", 00:20:10.959 "listen_address": { 00:20:10.959 "trtype": "TCP", 00:20:10.959 "adrfam": "IPv4", 00:20:10.959 "traddr": "10.0.0.2", 00:20:10.959 "trsvcid": "4420" 00:20:10.959 }, 00:20:10.959 "peer_address": { 00:20:10.959 "trtype": "TCP", 00:20:10.959 "adrfam": "IPv4", 00:20:10.959 "traddr": "10.0.0.1", 00:20:10.959 "trsvcid": "33292" 00:20:10.959 }, 00:20:10.959 "auth": { 00:20:10.959 "state": "completed", 00:20:10.959 "digest": "sha384", 00:20:10.959 "dhgroup": "ffdhe3072" 00:20:10.959 } 00:20:10.959 } 00:20:10.959 ]' 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.959 19:26:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.959 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.218 19:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:20:11.785 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.785 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:11.785 19:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.785 19:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.785 19:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.785 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.785 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.785 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.043 19:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.301 00:20:12.301 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.301 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.301 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.559 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.559 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.559 19:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.559 19:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.560 19:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.560 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.560 { 00:20:12.560 "cntlid": 69, 00:20:12.560 "qid": 0, 00:20:12.560 "state": "enabled", 00:20:12.560 "thread": "nvmf_tgt_poll_group_000", 00:20:12.560 "listen_address": { 00:20:12.560 "trtype": "TCP", 00:20:12.560 "adrfam": "IPv4", 00:20:12.560 "traddr": "10.0.0.2", 00:20:12.560 "trsvcid": "4420" 00:20:12.560 }, 00:20:12.560 "peer_address": { 00:20:12.560 "trtype": "TCP", 00:20:12.560 "adrfam": "IPv4", 00:20:12.560 "traddr": "10.0.0.1", 00:20:12.560 "trsvcid": "33324" 00:20:12.560 }, 00:20:12.560 "auth": { 00:20:12.560 "state": "completed", 00:20:12.560 "digest": "sha384", 00:20:12.560 "dhgroup": "ffdhe3072" 00:20:12.560 } 00:20:12.560 } 00:20:12.560 ]' 00:20:12.560 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.560 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.560 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.560 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.560 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.560 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.560 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.560 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.816 19:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret 
DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:20:13.383 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.383 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:13.383 19:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.383 19:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.383 19:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.383 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.383 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.383 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.641 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.641 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.900 { 00:20:13.900 "cntlid": 71, 00:20:13.900 "qid": 0, 00:20:13.900 "state": "enabled", 00:20:13.900 "thread": "nvmf_tgt_poll_group_000", 00:20:13.900 "listen_address": { 00:20:13.900 "trtype": "TCP", 00:20:13.900 "adrfam": "IPv4", 00:20:13.900 "traddr": "10.0.0.2", 00:20:13.900 "trsvcid": "4420" 00:20:13.900 }, 00:20:13.900 "peer_address": { 00:20:13.900 "trtype": "TCP", 00:20:13.900 "adrfam": "IPv4", 00:20:13.900 "traddr": "10.0.0.1", 00:20:13.900 "trsvcid": "33342" 00:20:13.900 }, 00:20:13.900 "auth": { 00:20:13.900 "state": "completed", 00:20:13.900 "digest": "sha384", 00:20:13.900 "dhgroup": "ffdhe3072" 00:20:13.900 } 00:20:13.900 } 00:20:13.900 ]' 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.900 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.159 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.159 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.159 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.159 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.159 19:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.159 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:20:14.725 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.725 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:14.725 19:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.725 19:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.725 19:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.725 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.725 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.725 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:14.726 19:26:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.984 19:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.242 00:20:15.242 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.242 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.242 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.500 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.500 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.501 { 00:20:15.501 "cntlid": 73, 00:20:15.501 "qid": 0, 00:20:15.501 "state": "enabled", 00:20:15.501 "thread": "nvmf_tgt_poll_group_000", 00:20:15.501 "listen_address": { 00:20:15.501 "trtype": "TCP", 00:20:15.501 "adrfam": "IPv4", 00:20:15.501 "traddr": "10.0.0.2", 00:20:15.501 "trsvcid": "4420" 00:20:15.501 }, 00:20:15.501 "peer_address": { 00:20:15.501 "trtype": "TCP", 00:20:15.501 "adrfam": "IPv4", 00:20:15.501 "traddr": "10.0.0.1", 00:20:15.501 "trsvcid": "33370" 00:20:15.501 }, 00:20:15.501 "auth": { 00:20:15.501 
"state": "completed", 00:20:15.501 "digest": "sha384", 00:20:15.501 "dhgroup": "ffdhe4096" 00:20:15.501 } 00:20:15.501 } 00:20:15.501 ]' 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.501 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.759 19:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:20:16.326 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.326 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:16.326 19:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.326 19:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.326 19:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.326 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.326 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.326 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.584 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.842 00:20:16.842 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.842 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.842 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.842 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.100 { 00:20:17.100 "cntlid": 75, 00:20:17.100 "qid": 0, 00:20:17.100 "state": "enabled", 00:20:17.100 "thread": "nvmf_tgt_poll_group_000", 00:20:17.100 "listen_address": { 00:20:17.100 "trtype": "TCP", 00:20:17.100 "adrfam": "IPv4", 00:20:17.100 "traddr": "10.0.0.2", 00:20:17.100 "trsvcid": "4420" 00:20:17.100 }, 00:20:17.100 "peer_address": { 00:20:17.100 "trtype": "TCP", 00:20:17.100 "adrfam": "IPv4", 00:20:17.100 "traddr": "10.0.0.1", 00:20:17.100 "trsvcid": "48930" 00:20:17.100 }, 00:20:17.100 "auth": { 00:20:17.100 "state": "completed", 00:20:17.100 "digest": "sha384", 00:20:17.100 "dhgroup": "ffdhe4096" 00:20:17.100 } 00:20:17.100 } 00:20:17.100 ]' 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.100 19:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.357 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.920 19:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:18.177 00:20:18.433 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.433 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.433 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.433 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.433 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.433 19:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.433 19:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.433 19:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.433 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.433 { 00:20:18.433 "cntlid": 77, 00:20:18.433 "qid": 0, 00:20:18.433 "state": "enabled", 00:20:18.433 "thread": "nvmf_tgt_poll_group_000", 00:20:18.434 "listen_address": { 00:20:18.434 "trtype": "TCP", 00:20:18.434 "adrfam": "IPv4", 00:20:18.434 "traddr": "10.0.0.2", 00:20:18.434 "trsvcid": "4420" 00:20:18.434 }, 00:20:18.434 "peer_address": { 00:20:18.434 "trtype": "TCP", 00:20:18.434 "adrfam": "IPv4", 00:20:18.434 "traddr": "10.0.0.1", 00:20:18.434 "trsvcid": "48960" 00:20:18.434 }, 00:20:18.434 "auth": { 00:20:18.434 "state": "completed", 00:20:18.434 "digest": "sha384", 00:20:18.434 "dhgroup": "ffdhe4096" 00:20:18.434 } 00:20:18.434 } 00:20:18.434 ]' 00:20:18.434 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.434 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.434 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.690 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.690 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.690 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.690 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.690 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.690 19:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:20:19.278 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.278 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:19.278 19:26:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.278 19:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.278 19:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.278 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.278 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.278 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.553 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.811 00:20:19.811 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.811 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.811 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.070 { 00:20:20.070 "cntlid": 79, 00:20:20.070 "qid": 
0, 00:20:20.070 "state": "enabled", 00:20:20.070 "thread": "nvmf_tgt_poll_group_000", 00:20:20.070 "listen_address": { 00:20:20.070 "trtype": "TCP", 00:20:20.070 "adrfam": "IPv4", 00:20:20.070 "traddr": "10.0.0.2", 00:20:20.070 "trsvcid": "4420" 00:20:20.070 }, 00:20:20.070 "peer_address": { 00:20:20.070 "trtype": "TCP", 00:20:20.070 "adrfam": "IPv4", 00:20:20.070 "traddr": "10.0.0.1", 00:20:20.070 "trsvcid": "48994" 00:20:20.070 }, 00:20:20.070 "auth": { 00:20:20.070 "state": "completed", 00:20:20.070 "digest": "sha384", 00:20:20.070 "dhgroup": "ffdhe4096" 00:20:20.070 } 00:20:20.070 } 00:20:20.070 ]' 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.070 19:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.328 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:20:20.895 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.895 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:20.895 19:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.895 19:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.895 19:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.895 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.895 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.895 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:20.895 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.154 19:26:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.154 19:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.413 00:20:21.413 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.413 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.413 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.671 { 00:20:21.671 "cntlid": 81, 00:20:21.671 "qid": 0, 00:20:21.671 "state": "enabled", 00:20:21.671 "thread": "nvmf_tgt_poll_group_000", 00:20:21.671 "listen_address": { 00:20:21.671 "trtype": "TCP", 00:20:21.671 "adrfam": "IPv4", 00:20:21.671 "traddr": "10.0.0.2", 00:20:21.671 "trsvcid": "4420" 00:20:21.671 }, 00:20:21.671 "peer_address": { 00:20:21.671 "trtype": "TCP", 00:20:21.671 "adrfam": "IPv4", 00:20:21.671 "traddr": "10.0.0.1", 00:20:21.671 "trsvcid": "49014" 00:20:21.671 }, 00:20:21.671 "auth": { 00:20:21.671 "state": "completed", 00:20:21.671 "digest": "sha384", 00:20:21.671 "dhgroup": "ffdhe6144" 00:20:21.671 } 00:20:21.671 } 00:20:21.671 ]' 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.671 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.930 19:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:20:22.498 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.498 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:22.498 19:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.498 19:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.498 19:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.498 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.498 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:22.498 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:22.756 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.757 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.015 00:20:23.015 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.015 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.015 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.273 { 00:20:23.273 "cntlid": 83, 00:20:23.273 "qid": 0, 00:20:23.273 "state": "enabled", 00:20:23.273 "thread": "nvmf_tgt_poll_group_000", 00:20:23.273 "listen_address": { 00:20:23.273 "trtype": "TCP", 00:20:23.273 "adrfam": "IPv4", 00:20:23.273 "traddr": "10.0.0.2", 00:20:23.273 "trsvcid": "4420" 00:20:23.273 }, 00:20:23.273 "peer_address": { 00:20:23.273 "trtype": "TCP", 00:20:23.273 "adrfam": "IPv4", 00:20:23.273 "traddr": "10.0.0.1", 00:20:23.273 "trsvcid": "49046" 00:20:23.273 }, 00:20:23.273 "auth": { 00:20:23.273 "state": "completed", 00:20:23.273 "digest": "sha384", 00:20:23.273 "dhgroup": "ffdhe6144" 00:20:23.273 } 00:20:23.273 } 00:20:23.273 ]' 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.273 19:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.273 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.274 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.274 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.532 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret 
DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:20:24.099 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.099 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.099 19:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.099 19:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.099 19:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.099 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.099 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:24.099 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.358 19:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.615 00:20:24.615 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.615 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.615 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.873 { 00:20:24.873 "cntlid": 85, 00:20:24.873 "qid": 0, 00:20:24.873 "state": "enabled", 00:20:24.873 "thread": "nvmf_tgt_poll_group_000", 00:20:24.873 "listen_address": { 00:20:24.873 "trtype": "TCP", 00:20:24.873 "adrfam": "IPv4", 00:20:24.873 "traddr": "10.0.0.2", 00:20:24.873 "trsvcid": "4420" 00:20:24.873 }, 00:20:24.873 "peer_address": { 00:20:24.873 "trtype": "TCP", 00:20:24.873 "adrfam": "IPv4", 00:20:24.873 "traddr": "10.0.0.1", 00:20:24.873 "trsvcid": "49074" 00:20:24.873 }, 00:20:24.873 "auth": { 00:20:24.873 "state": "completed", 00:20:24.873 "digest": "sha384", 00:20:24.873 "dhgroup": "ffdhe6144" 00:20:24.873 } 00:20:24.873 } 00:20:24.873 ]' 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.873 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.132 19:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:20:25.699 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.699 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:25.699 19:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.699 19:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.699 19:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.699 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.699 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
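The entries above and below repeat one DH-HMAC-CHAP authentication cycle per DH group and key index. The following is a minimal bash sketch of that cycle, reconstructed only from the RPC calls visible in this trace; the rpc.py path, host socket, NQNs and host UUID are copied from the log, while the rpc_tgt/rpc_host helper names and the surrounding scaffolding are illustrative assumptions, not the actual target/auth.sh source:

#!/usr/bin/env bash
# Illustrative reconstruction of one authentication cycle from the trace above.
# Assumptions: an SPDK nvmf target on 10.0.0.2:4420 answering on the default RPC
# socket, a second SPDK app acting as host on /var/tmp/host.sock, and keys
# key0..key3 / ckey0..ckey3 already registered on both sides.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rpc_tgt()  { "$RPC" "$@"; }                        # target-side RPCs (nvmf_*)
rpc_host() { "$RPC" -s /var/tmp/host.sock "$@"; }  # host-side RPCs (bdev_nvme_*)

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
digest=sha384 dhgroup=ffdhe6144 keyid=0

# Constrain the host to a single digest/dhgroup so the negotiation is deterministic.
rpc_host bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Allow the host on the subsystem with the key pair under test.
rpc_tgt nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# Attaching a controller triggers DH-HMAC-CHAP authentication on the new qpair.
rpc_host bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# Check that the qpair negotiated the expected digest/dhgroup and completed
# authentication; these are the same assertions that appear in the trace as
# xtrace'd [[ sha384 == \s\h\a\3\8\4 ]] style pattern matches.
qpairs=$(rpc_tgt nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
# Tear down before the next key/dhgroup combination.
rpc_host bdev_nvme_detach_controller nvme0

In the log the same sequence then continues with a kernel-initiator connect (nvme connect with the DHHC-1 secrets) before the host is removed from the subsystem again.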
00:20:25.699 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.957 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:25.957 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.957 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.957 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:25.957 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:25.958 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.958 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:25.958 19:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.958 19:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.958 19:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.958 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.958 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.216 00:20:26.216 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.216 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.216 19:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.475 { 00:20:26.475 "cntlid": 87, 00:20:26.475 "qid": 0, 00:20:26.475 "state": "enabled", 00:20:26.475 "thread": "nvmf_tgt_poll_group_000", 00:20:26.475 "listen_address": { 00:20:26.475 "trtype": "TCP", 00:20:26.475 "adrfam": "IPv4", 00:20:26.475 "traddr": "10.0.0.2", 00:20:26.475 "trsvcid": "4420" 00:20:26.475 }, 00:20:26.475 "peer_address": { 00:20:26.475 "trtype": "TCP", 00:20:26.475 "adrfam": "IPv4", 00:20:26.475 "traddr": "10.0.0.1", 00:20:26.475 "trsvcid": "49120" 00:20:26.475 }, 00:20:26.475 "auth": { 00:20:26.475 "state": "completed", 
00:20:26.475 "digest": "sha384", 00:20:26.475 "dhgroup": "ffdhe6144" 00:20:26.475 } 00:20:26.475 } 00:20:26.475 ]' 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.475 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.733 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:20:27.299 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.299 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.299 19:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.299 19:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.299 19:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.299 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.299 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.299 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:27.299 19:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.558 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.816 00:20:27.816 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.816 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.816 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.075 { 00:20:28.075 "cntlid": 89, 00:20:28.075 "qid": 0, 00:20:28.075 "state": "enabled", 00:20:28.075 "thread": "nvmf_tgt_poll_group_000", 00:20:28.075 "listen_address": { 00:20:28.075 "trtype": "TCP", 00:20:28.075 "adrfam": "IPv4", 00:20:28.075 "traddr": "10.0.0.2", 00:20:28.075 "trsvcid": "4420" 00:20:28.075 }, 00:20:28.075 "peer_address": { 00:20:28.075 "trtype": "TCP", 00:20:28.075 "adrfam": "IPv4", 00:20:28.075 "traddr": "10.0.0.1", 00:20:28.075 "trsvcid": "57410" 00:20:28.075 }, 00:20:28.075 "auth": { 00:20:28.075 "state": "completed", 00:20:28.075 "digest": "sha384", 00:20:28.075 "dhgroup": "ffdhe8192" 00:20:28.075 } 00:20:28.075 } 00:20:28.075 ]' 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.075 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.332 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.332 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.332 19:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.332 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:20:28.899 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.899 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.899 19:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.899 19:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.899 19:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.899 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.899 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:28.899 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.167 19:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
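Each cycle in this trace also exercises the kernel NVMe/TCP initiator via nvme-cli before removing the host again. A minimal sketch of that half of the cycle, assembled from the option names visible in the log; the DHHC-1 values below are truncated placeholders, not the real secrets from this run:

#!/usr/bin/env bash
# Kernel-initiator half of the cycle suggested by the nvme connect / nvme
# disconnect entries in this trace. The DHHC-1 values are placeholders; the real
# runs pass the full base64-encoded secrets shown in the log.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
hostid=80aaeb9f-0274-ea11-906e-0017a4403562
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Connect through the kernel initiator, authenticating with the host secret and,
# when bidirectional authentication is being tested, the controller secret too.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
     --dhchap-secret 'DHHC-1:00:<host-secret>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-secret>'

# Tear down so the next digest/dhgroup/key combination starts from a clean state.
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"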
00:20:29.732 00:20:29.732 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.732 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.732 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.732 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.732 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.732 19:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.732 19:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.732 19:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.990 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.990 { 00:20:29.990 "cntlid": 91, 00:20:29.990 "qid": 0, 00:20:29.990 "state": "enabled", 00:20:29.990 "thread": "nvmf_tgt_poll_group_000", 00:20:29.990 "listen_address": { 00:20:29.990 "trtype": "TCP", 00:20:29.990 "adrfam": "IPv4", 00:20:29.990 "traddr": "10.0.0.2", 00:20:29.990 "trsvcid": "4420" 00:20:29.990 }, 00:20:29.990 "peer_address": { 00:20:29.990 "trtype": "TCP", 00:20:29.990 "adrfam": "IPv4", 00:20:29.990 "traddr": "10.0.0.1", 00:20:29.990 "trsvcid": "57426" 00:20:29.990 }, 00:20:29.990 "auth": { 00:20:29.990 "state": "completed", 00:20:29.990 "digest": "sha384", 00:20:29.990 "dhgroup": "ffdhe8192" 00:20:29.990 } 00:20:29.990 } 00:20:29.990 ]' 00:20:29.990 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.990 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.990 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.990 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:29.990 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.990 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.990 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.990 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.248 19:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:20:30.813 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.813 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:30.813 19:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:30.813 19:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.813 19:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.813 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.813 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:30.813 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.070 19:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.328 00:20:31.328 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.328 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.328 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.586 { 
00:20:31.586 "cntlid": 93, 00:20:31.586 "qid": 0, 00:20:31.586 "state": "enabled", 00:20:31.586 "thread": "nvmf_tgt_poll_group_000", 00:20:31.586 "listen_address": { 00:20:31.586 "trtype": "TCP", 00:20:31.586 "adrfam": "IPv4", 00:20:31.586 "traddr": "10.0.0.2", 00:20:31.586 "trsvcid": "4420" 00:20:31.586 }, 00:20:31.586 "peer_address": { 00:20:31.586 "trtype": "TCP", 00:20:31.586 "adrfam": "IPv4", 00:20:31.586 "traddr": "10.0.0.1", 00:20:31.586 "trsvcid": "57458" 00:20:31.586 }, 00:20:31.586 "auth": { 00:20:31.586 "state": "completed", 00:20:31.586 "digest": "sha384", 00:20:31.586 "dhgroup": "ffdhe8192" 00:20:31.586 } 00:20:31.586 } 00:20:31.586 ]' 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.586 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.845 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.845 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.845 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.845 19:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:20:32.411 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.411 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:32.411 19:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.411 19:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.411 19:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.411 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.411 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.411 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.667 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:32.667 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.667 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.667 19:26:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:32.668 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:32.668 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.668 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:32.668 19:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.668 19:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.668 19:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.668 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.668 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.234 00:20:33.234 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.234 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.234 19:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.234 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.234 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.234 19:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.234 19:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.234 19:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.234 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.234 { 00:20:33.234 "cntlid": 95, 00:20:33.234 "qid": 0, 00:20:33.234 "state": "enabled", 00:20:33.234 "thread": "nvmf_tgt_poll_group_000", 00:20:33.234 "listen_address": { 00:20:33.234 "trtype": "TCP", 00:20:33.234 "adrfam": "IPv4", 00:20:33.234 "traddr": "10.0.0.2", 00:20:33.234 "trsvcid": "4420" 00:20:33.234 }, 00:20:33.234 "peer_address": { 00:20:33.234 "trtype": "TCP", 00:20:33.234 "adrfam": "IPv4", 00:20:33.234 "traddr": "10.0.0.1", 00:20:33.234 "trsvcid": "57472" 00:20:33.234 }, 00:20:33.234 "auth": { 00:20:33.234 "state": "completed", 00:20:33.234 "digest": "sha384", 00:20:33.234 "dhgroup": "ffdhe8192" 00:20:33.234 } 00:20:33.234 } 00:20:33.234 ]' 00:20:33.234 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.492 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.492 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.492 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.492 19:26:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.492 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.492 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.492 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.750 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:20:34.318 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.318 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:34.318 19:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.318 19:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.318 19:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.318 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:34.318 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.318 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.318 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:34.318 19:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.318 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.576 00:20:34.576 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.576 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.576 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.834 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.834 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.834 19:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.834 19:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.834 19:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.834 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.835 { 00:20:34.835 "cntlid": 97, 00:20:34.835 "qid": 0, 00:20:34.835 "state": "enabled", 00:20:34.835 "thread": "nvmf_tgt_poll_group_000", 00:20:34.835 "listen_address": { 00:20:34.835 "trtype": "TCP", 00:20:34.835 "adrfam": "IPv4", 00:20:34.835 "traddr": "10.0.0.2", 00:20:34.835 "trsvcid": "4420" 00:20:34.835 }, 00:20:34.835 "peer_address": { 00:20:34.835 "trtype": "TCP", 00:20:34.835 "adrfam": "IPv4", 00:20:34.835 "traddr": "10.0.0.1", 00:20:34.835 "trsvcid": "57496" 00:20:34.835 }, 00:20:34.835 "auth": { 00:20:34.835 "state": "completed", 00:20:34.835 "digest": "sha512", 00:20:34.835 "dhgroup": "null" 00:20:34.835 } 00:20:34.835 } 00:20:34.835 ]' 00:20:34.835 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.835 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.835 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.835 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:34.835 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.835 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.835 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.835 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.093 19:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret 
DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:20:35.659 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.659 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:35.659 19:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.659 19:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.659 19:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.659 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.659 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.659 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.980 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.239 00:20:36.239 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.239 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.239 19:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.239 19:26:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.239 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.239 19:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.239 19:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.239 19:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.239 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.239 { 00:20:36.239 "cntlid": 99, 00:20:36.239 "qid": 0, 00:20:36.239 "state": "enabled", 00:20:36.239 "thread": "nvmf_tgt_poll_group_000", 00:20:36.239 "listen_address": { 00:20:36.239 "trtype": "TCP", 00:20:36.239 "adrfam": "IPv4", 00:20:36.239 "traddr": "10.0.0.2", 00:20:36.239 "trsvcid": "4420" 00:20:36.239 }, 00:20:36.239 "peer_address": { 00:20:36.239 "trtype": "TCP", 00:20:36.239 "adrfam": "IPv4", 00:20:36.239 "traddr": "10.0.0.1", 00:20:36.239 "trsvcid": "44236" 00:20:36.239 }, 00:20:36.239 "auth": { 00:20:36.239 "state": "completed", 00:20:36.239 "digest": "sha512", 00:20:36.239 "dhgroup": "null" 00:20:36.239 } 00:20:36.239 } 00:20:36.239 ]' 00:20:36.239 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.239 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.239 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.498 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:36.498 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.498 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.498 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.498 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.498 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:20:37.065 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.066 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:37.066 19:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.066 19:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.066 19:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.066 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.066 19:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:37.066 19:26:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:37.324 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.325 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.583 00:20:37.583 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.583 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.583 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.842 { 00:20:37.842 "cntlid": 101, 00:20:37.842 "qid": 0, 00:20:37.842 "state": "enabled", 00:20:37.842 "thread": "nvmf_tgt_poll_group_000", 00:20:37.842 "listen_address": { 00:20:37.842 "trtype": "TCP", 00:20:37.842 "adrfam": "IPv4", 00:20:37.842 "traddr": "10.0.0.2", 00:20:37.842 "trsvcid": "4420" 00:20:37.842 }, 00:20:37.842 "peer_address": { 00:20:37.842 "trtype": "TCP", 00:20:37.842 "adrfam": "IPv4", 00:20:37.842 "traddr": "10.0.0.1", 00:20:37.842 "trsvcid": "44266" 00:20:37.842 }, 00:20:37.842 "auth": 
{ 00:20:37.842 "state": "completed", 00:20:37.842 "digest": "sha512", 00:20:37.842 "dhgroup": "null" 00:20:37.842 } 00:20:37.842 } 00:20:37.842 ]' 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.842 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.101 19:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:20:38.667 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.667 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:38.667 19:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.667 19:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.667 19:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.667 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.667 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:38.667 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.926 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.184 00:20:39.184 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.184 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.184 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.184 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.184 19:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.184 19:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.184 19:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.184 19:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.184 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.184 { 00:20:39.184 "cntlid": 103, 00:20:39.184 "qid": 0, 00:20:39.184 "state": "enabled", 00:20:39.184 "thread": "nvmf_tgt_poll_group_000", 00:20:39.184 "listen_address": { 00:20:39.184 "trtype": "TCP", 00:20:39.184 "adrfam": "IPv4", 00:20:39.184 "traddr": "10.0.0.2", 00:20:39.184 "trsvcid": "4420" 00:20:39.184 }, 00:20:39.184 "peer_address": { 00:20:39.184 "trtype": "TCP", 00:20:39.184 "adrfam": "IPv4", 00:20:39.185 "traddr": "10.0.0.1", 00:20:39.185 "trsvcid": "44292" 00:20:39.185 }, 00:20:39.185 "auth": { 00:20:39.185 "state": "completed", 00:20:39.185 "digest": "sha512", 00:20:39.185 "dhgroup": "null" 00:20:39.185 } 00:20:39.185 } 00:20:39.185 ]' 00:20:39.185 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.443 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.443 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.443 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:39.443 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.443 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.443 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.443 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.702 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:20:40.270 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.270 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:40.270 19:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.270 19:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.270 19:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.270 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.270 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.270 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.270 19:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.270 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.529 00:20:40.529 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.529 19:26:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.529 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.787 { 00:20:40.787 "cntlid": 105, 00:20:40.787 "qid": 0, 00:20:40.787 "state": "enabled", 00:20:40.787 "thread": "nvmf_tgt_poll_group_000", 00:20:40.787 "listen_address": { 00:20:40.787 "trtype": "TCP", 00:20:40.787 "adrfam": "IPv4", 00:20:40.787 "traddr": "10.0.0.2", 00:20:40.787 "trsvcid": "4420" 00:20:40.787 }, 00:20:40.787 "peer_address": { 00:20:40.787 "trtype": "TCP", 00:20:40.787 "adrfam": "IPv4", 00:20:40.787 "traddr": "10.0.0.1", 00:20:40.787 "trsvcid": "44336" 00:20:40.787 }, 00:20:40.787 "auth": { 00:20:40.787 "state": "completed", 00:20:40.787 "digest": "sha512", 00:20:40.787 "dhgroup": "ffdhe2048" 00:20:40.787 } 00:20:40.787 } 00:20:40.787 ]' 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.787 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.046 19:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:20:41.612 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.612 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:41.612 19:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.612 19:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:41.612 19:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.612 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.612 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.612 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.871 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.131 00:20:42.131 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.131 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.131 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.390 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.390 19:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.390 19:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.390 19:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.390 19:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.390 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.390 { 00:20:42.390 "cntlid": 107, 00:20:42.390 "qid": 0, 00:20:42.390 "state": "enabled", 00:20:42.390 "thread": 
"nvmf_tgt_poll_group_000", 00:20:42.390 "listen_address": { 00:20:42.390 "trtype": "TCP", 00:20:42.390 "adrfam": "IPv4", 00:20:42.390 "traddr": "10.0.0.2", 00:20:42.390 "trsvcid": "4420" 00:20:42.390 }, 00:20:42.390 "peer_address": { 00:20:42.390 "trtype": "TCP", 00:20:42.390 "adrfam": "IPv4", 00:20:42.390 "traddr": "10.0.0.1", 00:20:42.390 "trsvcid": "44362" 00:20:42.390 }, 00:20:42.390 "auth": { 00:20:42.390 "state": "completed", 00:20:42.390 "digest": "sha512", 00:20:42.390 "dhgroup": "ffdhe2048" 00:20:42.390 } 00:20:42.390 } 00:20:42.390 ]' 00:20:42.390 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.390 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.390 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.390 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.390 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.390 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.390 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.390 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.648 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:20:43.215 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.215 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:43.215 19:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.215 19:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.215 19:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.215 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.215 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.215 19:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:43.215 19:26:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.215 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.474 00:20:43.474 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.474 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.474 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.733 { 00:20:43.733 "cntlid": 109, 00:20:43.733 "qid": 0, 00:20:43.733 "state": "enabled", 00:20:43.733 "thread": "nvmf_tgt_poll_group_000", 00:20:43.733 "listen_address": { 00:20:43.733 "trtype": "TCP", 00:20:43.733 "adrfam": "IPv4", 00:20:43.733 "traddr": "10.0.0.2", 00:20:43.733 "trsvcid": "4420" 00:20:43.733 }, 00:20:43.733 "peer_address": { 00:20:43.733 "trtype": "TCP", 00:20:43.733 "adrfam": "IPv4", 00:20:43.733 "traddr": "10.0.0.1", 00:20:43.733 "trsvcid": "44390" 00:20:43.733 }, 00:20:43.733 "auth": { 00:20:43.733 "state": "completed", 00:20:43.733 "digest": "sha512", 00:20:43.733 "dhgroup": "ffdhe2048" 00:20:43.733 } 00:20:43.733 } 00:20:43.733 ]' 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.733 19:26:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.991 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.991 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.991 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.991 19:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:20:44.557 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.557 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:44.557 19:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.557 19:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.557 19:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.557 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.557 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.557 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.815 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:44.815 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.815 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.815 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:44.815 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:44.816 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.816 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:44.816 19:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.816 19:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.816 19:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.816 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.816 19:26:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:45.073 00:20:45.073 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.073 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.073 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.330 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.330 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.330 19:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.330 19:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.330 19:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.330 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.330 { 00:20:45.330 "cntlid": 111, 00:20:45.330 "qid": 0, 00:20:45.330 "state": "enabled", 00:20:45.330 "thread": "nvmf_tgt_poll_group_000", 00:20:45.330 "listen_address": { 00:20:45.330 "trtype": "TCP", 00:20:45.330 "adrfam": "IPv4", 00:20:45.330 "traddr": "10.0.0.2", 00:20:45.330 "trsvcid": "4420" 00:20:45.330 }, 00:20:45.330 "peer_address": { 00:20:45.330 "trtype": "TCP", 00:20:45.330 "adrfam": "IPv4", 00:20:45.330 "traddr": "10.0.0.1", 00:20:45.330 "trsvcid": "44422" 00:20:45.330 }, 00:20:45.330 "auth": { 00:20:45.330 "state": "completed", 00:20:45.330 "digest": "sha512", 00:20:45.330 "dhgroup": "ffdhe2048" 00:20:45.330 } 00:20:45.330 } 00:20:45.330 ]' 00:20:45.330 19:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.330 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.330 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.330 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.330 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.330 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.330 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.330 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.587 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:20:46.152 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.152 19:26:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:46.152 19:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.152 19:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.152 19:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.152 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.152 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.152 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:46.152 19:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.409 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.666 00:20:46.666 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.666 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.666 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.924 { 00:20:46.924 "cntlid": 113, 00:20:46.924 "qid": 0, 00:20:46.924 "state": "enabled", 00:20:46.924 "thread": "nvmf_tgt_poll_group_000", 00:20:46.924 "listen_address": { 00:20:46.924 "trtype": "TCP", 00:20:46.924 "adrfam": "IPv4", 00:20:46.924 "traddr": "10.0.0.2", 00:20:46.924 "trsvcid": "4420" 00:20:46.924 }, 00:20:46.924 "peer_address": { 00:20:46.924 "trtype": "TCP", 00:20:46.924 "adrfam": "IPv4", 00:20:46.924 "traddr": "10.0.0.1", 00:20:46.924 "trsvcid": "48528" 00:20:46.924 }, 00:20:46.924 "auth": { 00:20:46.924 "state": "completed", 00:20:46.924 "digest": "sha512", 00:20:46.924 "dhgroup": "ffdhe3072" 00:20:46.924 } 00:20:46.924 } 00:20:46.924 ]' 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.924 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.182 19:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.750 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.008 00:20:48.008 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.008 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.008 19:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.267 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.267 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.267 19:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.267 19:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.267 19:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.267 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.267 { 00:20:48.267 "cntlid": 115, 00:20:48.267 "qid": 0, 00:20:48.267 "state": "enabled", 00:20:48.267 "thread": "nvmf_tgt_poll_group_000", 00:20:48.267 "listen_address": { 00:20:48.267 "trtype": "TCP", 00:20:48.267 "adrfam": "IPv4", 00:20:48.267 "traddr": "10.0.0.2", 00:20:48.267 "trsvcid": "4420" 00:20:48.267 }, 00:20:48.267 "peer_address": { 00:20:48.267 "trtype": "TCP", 00:20:48.267 "adrfam": "IPv4", 00:20:48.267 "traddr": "10.0.0.1", 00:20:48.267 "trsvcid": "48546" 00:20:48.267 }, 00:20:48.267 "auth": { 00:20:48.267 "state": "completed", 00:20:48.267 "digest": "sha512", 00:20:48.267 "dhgroup": "ffdhe3072" 00:20:48.267 } 00:20:48.267 } 
00:20:48.267 ]' 00:20:48.267 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.267 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.267 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.526 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.526 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.526 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.526 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.526 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.526 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:20:49.094 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.094 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:49.094 19:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.094 19:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.094 19:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.094 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.094 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:49.094 19:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.352 19:27:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.352 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.611 00:20:49.611 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.611 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.611 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.869 { 00:20:49.869 "cntlid": 117, 00:20:49.869 "qid": 0, 00:20:49.869 "state": "enabled", 00:20:49.869 "thread": "nvmf_tgt_poll_group_000", 00:20:49.869 "listen_address": { 00:20:49.869 "trtype": "TCP", 00:20:49.869 "adrfam": "IPv4", 00:20:49.869 "traddr": "10.0.0.2", 00:20:49.869 "trsvcid": "4420" 00:20:49.869 }, 00:20:49.869 "peer_address": { 00:20:49.869 "trtype": "TCP", 00:20:49.869 "adrfam": "IPv4", 00:20:49.869 "traddr": "10.0.0.1", 00:20:49.869 "trsvcid": "48570" 00:20:49.869 }, 00:20:49.869 "auth": { 00:20:49.869 "state": "completed", 00:20:49.869 "digest": "sha512", 00:20:49.869 "dhgroup": "ffdhe3072" 00:20:49.869 } 00:20:49.869 } 00:20:49.869 ]' 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.869 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.128 19:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:20:50.696 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.696 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:50.696 19:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.696 19:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.696 19:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.696 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.696 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.696 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.955 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.214 00:20:51.214 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.214 19:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.214 19:27:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.473 { 00:20:51.473 "cntlid": 119, 00:20:51.473 "qid": 0, 00:20:51.473 "state": "enabled", 00:20:51.473 "thread": "nvmf_tgt_poll_group_000", 00:20:51.473 "listen_address": { 00:20:51.473 "trtype": "TCP", 00:20:51.473 "adrfam": "IPv4", 00:20:51.473 "traddr": "10.0.0.2", 00:20:51.473 "trsvcid": "4420" 00:20:51.473 }, 00:20:51.473 "peer_address": { 00:20:51.473 "trtype": "TCP", 00:20:51.473 "adrfam": "IPv4", 00:20:51.473 "traddr": "10.0.0.1", 00:20:51.473 "trsvcid": "48596" 00:20:51.473 }, 00:20:51.473 "auth": { 00:20:51.473 "state": "completed", 00:20:51.473 "digest": "sha512", 00:20:51.473 "dhgroup": "ffdhe3072" 00:20:51.473 } 00:20:51.473 } 00:20:51.473 ]' 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.473 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.733 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:20:52.302 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.302 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:52.302 19:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.302 19:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.302 19:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.302 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.302 19:27:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.302 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:52.302 19:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:52.302 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:52.302 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.302 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.302 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:52.302 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:52.302 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.302 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.302 19:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.302 19:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.302 19:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.571 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.571 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.571 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.862 { 00:20:52.862 "cntlid": 121, 00:20:52.862 "qid": 0, 00:20:52.862 "state": "enabled", 00:20:52.862 "thread": "nvmf_tgt_poll_group_000", 00:20:52.862 "listen_address": { 00:20:52.862 "trtype": "TCP", 00:20:52.862 "adrfam": "IPv4", 
00:20:52.862 "traddr": "10.0.0.2", 00:20:52.862 "trsvcid": "4420" 00:20:52.862 }, 00:20:52.862 "peer_address": { 00:20:52.862 "trtype": "TCP", 00:20:52.862 "adrfam": "IPv4", 00:20:52.862 "traddr": "10.0.0.1", 00:20:52.862 "trsvcid": "48634" 00:20:52.862 }, 00:20:52.862 "auth": { 00:20:52.862 "state": "completed", 00:20:52.862 "digest": "sha512", 00:20:52.862 "dhgroup": "ffdhe4096" 00:20:52.862 } 00:20:52.862 } 00:20:52.862 ]' 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.862 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.121 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.121 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.121 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.121 19:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:20:53.686 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.686 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:53.686 19:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.686 19:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.686 19:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.686 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.686 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:53.686 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:53.945 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:53.945 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.945 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.945 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:53.945 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:53.946 19:27:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.946 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.946 19:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.946 19:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.946 19:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.946 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.946 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.204 00:20:54.204 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.204 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.204 19:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.479 { 00:20:54.479 "cntlid": 123, 00:20:54.479 "qid": 0, 00:20:54.479 "state": "enabled", 00:20:54.479 "thread": "nvmf_tgt_poll_group_000", 00:20:54.479 "listen_address": { 00:20:54.479 "trtype": "TCP", 00:20:54.479 "adrfam": "IPv4", 00:20:54.479 "traddr": "10.0.0.2", 00:20:54.479 "trsvcid": "4420" 00:20:54.479 }, 00:20:54.479 "peer_address": { 00:20:54.479 "trtype": "TCP", 00:20:54.479 "adrfam": "IPv4", 00:20:54.479 "traddr": "10.0.0.1", 00:20:54.479 "trsvcid": "48672" 00:20:54.479 }, 00:20:54.479 "auth": { 00:20:54.479 "state": "completed", 00:20:54.479 "digest": "sha512", 00:20:54.479 "dhgroup": "ffdhe4096" 00:20:54.479 } 00:20:54.479 } 00:20:54.479 ]' 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.479 19:27:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.479 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.738 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:20:55.304 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.304 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:55.304 19:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.304 19:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.304 19:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.304 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.304 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.304 19:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.563 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.822 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.822 { 00:20:55.822 "cntlid": 125, 00:20:55.822 "qid": 0, 00:20:55.822 "state": "enabled", 00:20:55.822 "thread": "nvmf_tgt_poll_group_000", 00:20:55.822 "listen_address": { 00:20:55.822 "trtype": "TCP", 00:20:55.822 "adrfam": "IPv4", 00:20:55.822 "traddr": "10.0.0.2", 00:20:55.822 "trsvcid": "4420" 00:20:55.822 }, 00:20:55.822 "peer_address": { 00:20:55.822 "trtype": "TCP", 00:20:55.822 "adrfam": "IPv4", 00:20:55.822 "traddr": "10.0.0.1", 00:20:55.822 "trsvcid": "48698" 00:20:55.822 }, 00:20:55.822 "auth": { 00:20:55.822 "state": "completed", 00:20:55.822 "digest": "sha512", 00:20:55.822 "dhgroup": "ffdhe4096" 00:20:55.822 } 00:20:55.822 } 00:20:55.822 ]' 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.822 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.081 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.081 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.081 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.081 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.081 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.081 19:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:20:56.648 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
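[editor's note] The trace above repeats one cycle per digest/dhgroup/key combination: restrict the host-side bdev_nvme options, register the host on the subsystem with the matching DH-HMAC-CHAP keys, attach a controller through the host RPC socket, verify the qpair, detach, then exercise the kernel initiator with nvme connect/disconnect before removing the host. Below is a condensed, hedged sketch of one such cycle; the commands, paths, and NQNs are copied from the trace, while the key names (key2/ckey2) and the secret placeholders are illustrative, and the target-side calls are assumed to use the default RPC socket (the trace's rpc_cmd).

    #!/usr/bin/env bash
    # Condensed sketch of one connect_authenticate cycle as it appears in the trace
    # (sha512 + ffdhe4096 shown; key names and secret values are placeholders).
    set -euo pipefail

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock   # host-side SPDK app, reached via "hostrpc" in the trace
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # 1. Restrict the host bdev_nvme layer to the digest/dhgroup under test.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # 2. Register the host on the subsystem with the DH-HMAC-CHAP key pair
    #    (target-side RPC; default socket assumed, matching rpc_cmd in the trace).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller through the host app and let it authenticate.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 4. Confirm the attach succeeded, then tear the controller down again.
    [[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

    # 5. Repeat with the kernel initiator, then drop the host from the subsystem.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret "DHHC-1:...host-key-placeholder..." \
        --dhchap-ctrl-secret "DHHC-1:...ctrl-key-placeholder..."
    nvme disconnect -n "$SUBNQN"
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

[end editor's note]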
00:20:56.648 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:56.648 19:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.648 19:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.648 19:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.648 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.648 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:56.648 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.907 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.165 00:20:57.165 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.165 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.165 19:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.423 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.423 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.423 19:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.423 19:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:57.423 19:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.423 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.423 { 00:20:57.423 "cntlid": 127, 00:20:57.423 "qid": 0, 00:20:57.423 "state": "enabled", 00:20:57.423 "thread": "nvmf_tgt_poll_group_000", 00:20:57.423 "listen_address": { 00:20:57.423 "trtype": "TCP", 00:20:57.423 "adrfam": "IPv4", 00:20:57.423 "traddr": "10.0.0.2", 00:20:57.423 "trsvcid": "4420" 00:20:57.423 }, 00:20:57.423 "peer_address": { 00:20:57.423 "trtype": "TCP", 00:20:57.423 "adrfam": "IPv4", 00:20:57.423 "traddr": "10.0.0.1", 00:20:57.423 "trsvcid": "53962" 00:20:57.423 }, 00:20:57.423 "auth": { 00:20:57.423 "state": "completed", 00:20:57.423 "digest": "sha512", 00:20:57.423 "dhgroup": "ffdhe4096" 00:20:57.423 } 00:20:57.423 } 00:20:57.423 ]' 00:20:57.423 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.423 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.423 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.424 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.424 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.682 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.682 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.682 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.682 19:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:20:58.249 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.249 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:58.249 19:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.249 19:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.249 19:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.249 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.249 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.249 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:58.249 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.508 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.767 00:20:58.767 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.767 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.767 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.025 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.025 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.025 19:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.025 19:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.025 19:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.025 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.025 { 00:20:59.025 "cntlid": 129, 00:20:59.025 "qid": 0, 00:20:59.025 "state": "enabled", 00:20:59.025 "thread": "nvmf_tgt_poll_group_000", 00:20:59.025 "listen_address": { 00:20:59.025 "trtype": "TCP", 00:20:59.025 "adrfam": "IPv4", 00:20:59.025 "traddr": "10.0.0.2", 00:20:59.025 "trsvcid": "4420" 00:20:59.025 }, 00:20:59.025 "peer_address": { 00:20:59.025 "trtype": "TCP", 00:20:59.025 "adrfam": "IPv4", 00:20:59.025 "traddr": "10.0.0.1", 00:20:59.025 "trsvcid": "53990" 00:20:59.025 }, 00:20:59.025 "auth": { 00:20:59.025 "state": "completed", 00:20:59.025 "digest": "sha512", 00:20:59.025 "dhgroup": "ffdhe6144" 00:20:59.025 } 00:20:59.025 } 00:20:59.025 ]' 00:20:59.025 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.025 19:27:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.025 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.025 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:59.025 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.286 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.286 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.286 19:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.286 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:20:59.856 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.856 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:59.856 19:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.856 19:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.856 19:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.856 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.856 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.856 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.113 19:27:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.113 19:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.372 00:21:00.372 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.372 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.372 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.630 { 00:21:00.630 "cntlid": 131, 00:21:00.630 "qid": 0, 00:21:00.630 "state": "enabled", 00:21:00.630 "thread": "nvmf_tgt_poll_group_000", 00:21:00.630 "listen_address": { 00:21:00.630 "trtype": "TCP", 00:21:00.630 "adrfam": "IPv4", 00:21:00.630 "traddr": "10.0.0.2", 00:21:00.630 "trsvcid": "4420" 00:21:00.630 }, 00:21:00.630 "peer_address": { 00:21:00.630 "trtype": "TCP", 00:21:00.630 "adrfam": "IPv4", 00:21:00.630 "traddr": "10.0.0.1", 00:21:00.630 "trsvcid": "54028" 00:21:00.630 }, 00:21:00.630 "auth": { 00:21:00.630 "state": "completed", 00:21:00.630 "digest": "sha512", 00:21:00.630 "dhgroup": "ffdhe6144" 00:21:00.630 } 00:21:00.630 } 00:21:00.630 ]' 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.630 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.631 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.631 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.631 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.891 19:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:21:01.458 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.458 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:01.458 19:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.458 19:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.458 19:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.458 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.458 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.458 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.716 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.975 00:21:01.975 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.975 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.975 19:27:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.233 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.233 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.233 19:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.233 19:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.233 19:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.233 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.233 { 00:21:02.233 "cntlid": 133, 00:21:02.233 "qid": 0, 00:21:02.233 "state": "enabled", 00:21:02.233 "thread": "nvmf_tgt_poll_group_000", 00:21:02.233 "listen_address": { 00:21:02.233 "trtype": "TCP", 00:21:02.233 "adrfam": "IPv4", 00:21:02.233 "traddr": "10.0.0.2", 00:21:02.233 "trsvcid": "4420" 00:21:02.233 }, 00:21:02.233 "peer_address": { 00:21:02.233 "trtype": "TCP", 00:21:02.233 "adrfam": "IPv4", 00:21:02.233 "traddr": "10.0.0.1", 00:21:02.233 "trsvcid": "54060" 00:21:02.233 }, 00:21:02.233 "auth": { 00:21:02.233 "state": "completed", 00:21:02.233 "digest": "sha512", 00:21:02.233 "dhgroup": "ffdhe6144" 00:21:02.233 } 00:21:02.233 } 00:21:02.233 ]' 00:21:02.233 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.233 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.233 19:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.233 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.233 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.233 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.233 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.233 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.492 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:21:03.058 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.058 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:03.058 19:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.058 19:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.058 19:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.058 19:27:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.058 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.058 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:03.317 19:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:03.576 00:21:03.576 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.576 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.576 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.834 { 00:21:03.834 "cntlid": 135, 00:21:03.834 "qid": 0, 00:21:03.834 "state": "enabled", 00:21:03.834 "thread": "nvmf_tgt_poll_group_000", 00:21:03.834 "listen_address": { 00:21:03.834 "trtype": "TCP", 00:21:03.834 "adrfam": "IPv4", 00:21:03.834 "traddr": "10.0.0.2", 00:21:03.834 "trsvcid": "4420" 00:21:03.834 }, 
00:21:03.834 "peer_address": { 00:21:03.834 "trtype": "TCP", 00:21:03.834 "adrfam": "IPv4", 00:21:03.834 "traddr": "10.0.0.1", 00:21:03.834 "trsvcid": "54086" 00:21:03.834 }, 00:21:03.834 "auth": { 00:21:03.834 "state": "completed", 00:21:03.834 "digest": "sha512", 00:21:03.834 "dhgroup": "ffdhe6144" 00:21:03.834 } 00:21:03.834 } 00:21:03.834 ]' 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.834 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.097 19:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:21:04.667 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.667 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.667 19:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.667 19:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.667 19:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.667 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.667 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.667 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.667 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.925 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.926 19:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.492 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.492 { 00:21:05.492 "cntlid": 137, 00:21:05.492 "qid": 0, 00:21:05.492 "state": "enabled", 00:21:05.492 "thread": "nvmf_tgt_poll_group_000", 00:21:05.492 "listen_address": { 00:21:05.492 "trtype": "TCP", 00:21:05.492 "adrfam": "IPv4", 00:21:05.492 "traddr": "10.0.0.2", 00:21:05.492 "trsvcid": "4420" 00:21:05.492 }, 00:21:05.492 "peer_address": { 00:21:05.492 "trtype": "TCP", 00:21:05.492 "adrfam": "IPv4", 00:21:05.492 "traddr": "10.0.0.1", 00:21:05.492 "trsvcid": "54104" 00:21:05.492 }, 00:21:05.492 "auth": { 00:21:05.492 "state": "completed", 00:21:05.492 "digest": "sha512", 00:21:05.492 "dhgroup": "ffdhe8192" 00:21:05.492 } 00:21:05.492 } 00:21:05.492 ]' 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.492 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.751 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.751 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.751 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.751 19:27:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.751 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.751 19:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:21:06.319 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.319 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.319 19:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.319 19:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.319 19:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.319 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.319 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.319 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.577 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.142 00:21:07.142 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.142 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.142 19:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.400 { 00:21:07.400 "cntlid": 139, 00:21:07.400 "qid": 0, 00:21:07.400 "state": "enabled", 00:21:07.400 "thread": "nvmf_tgt_poll_group_000", 00:21:07.400 "listen_address": { 00:21:07.400 "trtype": "TCP", 00:21:07.400 "adrfam": "IPv4", 00:21:07.400 "traddr": "10.0.0.2", 00:21:07.400 "trsvcid": "4420" 00:21:07.400 }, 00:21:07.400 "peer_address": { 00:21:07.400 "trtype": "TCP", 00:21:07.400 "adrfam": "IPv4", 00:21:07.400 "traddr": "10.0.0.1", 00:21:07.400 "trsvcid": "59184" 00:21:07.400 }, 00:21:07.400 "auth": { 00:21:07.400 "state": "completed", 00:21:07.400 "digest": "sha512", 00:21:07.400 "dhgroup": "ffdhe8192" 00:21:07.400 } 00:21:07.400 } 00:21:07.400 ]' 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.400 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.657 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg0MDM0M2M4YTBmODc1NDRhZTQyYmUzYTZjMDE0NjhjYDcB: --dhchap-ctrl-secret DHHC-1:02:MWQ5OTQ0MDU5YWYwMmFlZGMxOTVhYmU5ZGRhMmIyZDM4YjVhY2M1N2U3ZmY4YjQyM+jLGA==: 00:21:08.224 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.224 19:27:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:08.224 19:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.224 19:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.224 19:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.224 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.224 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:08.224 19:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.482 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.740 00:21:08.740 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.740 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.740 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.068 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.068 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.068 19:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.068 19:27:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:09.068 19:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.068 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.068 { 00:21:09.068 "cntlid": 141, 00:21:09.068 "qid": 0, 00:21:09.068 "state": "enabled", 00:21:09.068 "thread": "nvmf_tgt_poll_group_000", 00:21:09.068 "listen_address": { 00:21:09.068 "trtype": "TCP", 00:21:09.068 "adrfam": "IPv4", 00:21:09.068 "traddr": "10.0.0.2", 00:21:09.068 "trsvcid": "4420" 00:21:09.068 }, 00:21:09.068 "peer_address": { 00:21:09.068 "trtype": "TCP", 00:21:09.068 "adrfam": "IPv4", 00:21:09.068 "traddr": "10.0.0.1", 00:21:09.068 "trsvcid": "59222" 00:21:09.068 }, 00:21:09.069 "auth": { 00:21:09.069 "state": "completed", 00:21:09.069 "digest": "sha512", 00:21:09.069 "dhgroup": "ffdhe8192" 00:21:09.069 } 00:21:09.069 } 00:21:09.069 ]' 00:21:09.069 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.069 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.069 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.069 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.069 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.069 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.069 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.069 19:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.326 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWQwMWQxMDlhMGZmOGM1ZTg3NjhlNTk0YWFhNzcxMzg3ZGFmNmVjZDE4MWE5YmNhqOG2Wg==: --dhchap-ctrl-secret DHHC-1:01:YTM5ZmFiYzU3ZmZhOTkwN2MyNmZmMGYzZDIxZTVmY2Tzl0mt: 00:21:09.893 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.893 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:09.893 19:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.893 19:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.893 19:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.893 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.893 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:09.893 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.152 19:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.719 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.719 { 00:21:10.719 "cntlid": 143, 00:21:10.719 "qid": 0, 00:21:10.719 "state": "enabled", 00:21:10.719 "thread": "nvmf_tgt_poll_group_000", 00:21:10.719 "listen_address": { 00:21:10.719 "trtype": "TCP", 00:21:10.719 "adrfam": "IPv4", 00:21:10.719 "traddr": "10.0.0.2", 00:21:10.719 "trsvcid": "4420" 00:21:10.719 }, 00:21:10.719 "peer_address": { 00:21:10.719 "trtype": "TCP", 00:21:10.719 "adrfam": "IPv4", 00:21:10.719 "traddr": "10.0.0.1", 00:21:10.719 "trsvcid": "59246" 00:21:10.719 }, 00:21:10.719 "auth": { 00:21:10.719 "state": "completed", 00:21:10.719 "digest": "sha512", 00:21:10.719 "dhgroup": "ffdhe8192" 00:21:10.719 } 00:21:10.719 } 00:21:10.719 ]' 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.719 
19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.719 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.977 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.977 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.977 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.977 19:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.543 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.801 19:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.367 00:21:12.367 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.367 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.367 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.624 { 00:21:12.624 "cntlid": 145, 00:21:12.624 "qid": 0, 00:21:12.624 "state": "enabled", 00:21:12.624 "thread": "nvmf_tgt_poll_group_000", 00:21:12.624 "listen_address": { 00:21:12.624 "trtype": "TCP", 00:21:12.624 "adrfam": "IPv4", 00:21:12.624 "traddr": "10.0.0.2", 00:21:12.624 "trsvcid": "4420" 00:21:12.624 }, 00:21:12.624 "peer_address": { 00:21:12.624 "trtype": "TCP", 00:21:12.624 "adrfam": "IPv4", 00:21:12.624 "traddr": "10.0.0.1", 00:21:12.624 "trsvcid": "59274" 00:21:12.624 }, 00:21:12.624 "auth": { 00:21:12.624 "state": "completed", 00:21:12.624 "digest": "sha512", 00:21:12.624 "dhgroup": "ffdhe8192" 00:21:12.624 } 00:21:12.624 } 00:21:12.624 ]' 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.624 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.881 19:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Y2E3MDUyOGMzNzAyYTBiZGQ3NjljYjE2MzQxNWUzYWI5Y2JlYzZjMGUwMTcxN2M5RlXfBg==: --dhchap-ctrl-secret DHHC-1:03:Y2FhZjllODc4MWNiMDE3ZGFiMDY2YzFkOWRlOWE0ZDAwODY5YmY2ZDNkNGZmMzU1ODVjODA3YjliOGIxZWM3M0x7JtU=: 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.447 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:21:14.015 request: 00:21:14.015 { 00:21:14.015 "name": "nvme0", 00:21:14.015 "trtype": "tcp", 00:21:14.015 "traddr": "10.0.0.2", 00:21:14.015 "adrfam": "ipv4", 00:21:14.015 "trsvcid": "4420", 00:21:14.015 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:14.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:14.015 "prchk_reftag": false, 00:21:14.015 "prchk_guard": false, 00:21:14.015 "hdgst": false, 00:21:14.015 "ddgst": false, 00:21:14.015 "dhchap_key": "key2", 00:21:14.015 "method": "bdev_nvme_attach_controller", 00:21:14.015 "req_id": 1 00:21:14.015 } 00:21:14.015 Got JSON-RPC error response 00:21:14.015 response: 00:21:14.015 { 00:21:14.015 "code": -5, 00:21:14.015 "message": "Input/output error" 00:21:14.015 } 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.015 19:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.274 request: 00:21:14.274 { 00:21:14.274 "name": "nvme0", 00:21:14.274 "trtype": "tcp", 00:21:14.274 "traddr": "10.0.0.2", 00:21:14.274 "adrfam": "ipv4", 00:21:14.274 "trsvcid": "4420", 00:21:14.274 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:14.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:14.274 "prchk_reftag": false, 00:21:14.274 "prchk_guard": false, 00:21:14.274 "hdgst": false, 00:21:14.274 "ddgst": false, 00:21:14.274 "dhchap_key": "key1", 00:21:14.274 "dhchap_ctrlr_key": "ckey2", 00:21:14.274 "method": "bdev_nvme_attach_controller", 00:21:14.274 "req_id": 1 00:21:14.274 } 00:21:14.274 Got JSON-RPC error response 00:21:14.274 response: 00:21:14.274 { 00:21:14.274 "code": -5, 00:21:14.274 "message": "Input/output error" 00:21:14.274 } 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.274 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.842 request: 00:21:14.842 { 00:21:14.842 "name": "nvme0", 00:21:14.842 "trtype": "tcp", 00:21:14.842 "traddr": "10.0.0.2", 00:21:14.842 "adrfam": "ipv4", 00:21:14.842 "trsvcid": "4420", 00:21:14.842 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:14.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:14.842 "prchk_reftag": false, 00:21:14.842 "prchk_guard": false, 00:21:14.842 "hdgst": false, 00:21:14.842 "ddgst": false, 00:21:14.842 "dhchap_key": "key1", 00:21:14.842 "dhchap_ctrlr_key": "ckey1", 00:21:14.842 "method": "bdev_nvme_attach_controller", 00:21:14.842 "req_id": 1 00:21:14.842 } 00:21:14.842 Got JSON-RPC error response 00:21:14.842 response: 00:21:14.842 { 00:21:14.842 "code": -5, 00:21:14.842 "message": "Input/output error" 00:21:14.842 } 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1635165 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1635165 ']' 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1635165 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1635165 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1635165' 00:21:14.842 killing process with pid 1635165 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1635165 00:21:14.842 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1635165 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1656050 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1656050 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1656050 ']' 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1656050 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1656050 ']' 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.101 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.102 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
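The block above tears down the first nvmf_tgt instance and brings up a replacement inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and nvmf_auth logging enabled, then waitforlisten polls until the new process (pid 1656050) answers on /var/tmp/spdk.sock. A minimal sketch of that restart pattern, with the Jenkins workspace paths shortened and the polling loop simplified; the explicit framework_start_init call at the end is an assumption about how the app is released from its pre-init state once the socket is up:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll the RPC socket until the new target responds
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# With --wait-for-rpc the app waits in pre-init until told to continue
./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init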
00:21:15.102 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.102 19:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.360 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.360 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:15.360 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:15.360 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.360 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.619 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.878 00:21:15.878 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.878 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.878 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.137 { 00:21:16.137 
"cntlid": 1, 00:21:16.137 "qid": 0, 00:21:16.137 "state": "enabled", 00:21:16.137 "thread": "nvmf_tgt_poll_group_000", 00:21:16.137 "listen_address": { 00:21:16.137 "trtype": "TCP", 00:21:16.137 "adrfam": "IPv4", 00:21:16.137 "traddr": "10.0.0.2", 00:21:16.137 "trsvcid": "4420" 00:21:16.137 }, 00:21:16.137 "peer_address": { 00:21:16.137 "trtype": "TCP", 00:21:16.137 "adrfam": "IPv4", 00:21:16.137 "traddr": "10.0.0.1", 00:21:16.137 "trsvcid": "59330" 00:21:16.137 }, 00:21:16.137 "auth": { 00:21:16.137 "state": "completed", 00:21:16.137 "digest": "sha512", 00:21:16.137 "dhgroup": "ffdhe8192" 00:21:16.137 } 00:21:16.137 } 00:21:16.137 ]' 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.137 19:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.396 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDYxOGUyYTZjNGYzYzAyOTU5MDk2MDQyMGVmMTk2YWQzZmNmNjlmMTBmYzhjYjVhODllZTFiYTRlNzIzNmQzNjy9lcU=: 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:16.963 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:17.222 19:27:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.222 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:17.222 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.222 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:17.222 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.222 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:17.222 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.222 19:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.222 19:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.493 request: 00:21:17.493 { 00:21:17.493 "name": "nvme0", 00:21:17.493 "trtype": "tcp", 00:21:17.493 "traddr": "10.0.0.2", 00:21:17.493 "adrfam": "ipv4", 00:21:17.493 "trsvcid": "4420", 00:21:17.493 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:17.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:17.493 "prchk_reftag": false, 00:21:17.493 "prchk_guard": false, 00:21:17.493 "hdgst": false, 00:21:17.493 "ddgst": false, 00:21:17.493 "dhchap_key": "key3", 00:21:17.493 "method": "bdev_nvme_attach_controller", 00:21:17.493 "req_id": 1 00:21:17.493 } 00:21:17.493 Got JSON-RPC error response 00:21:17.493 response: 00:21:17.493 { 00:21:17.493 "code": -5, 00:21:17.493 "message": "Input/output error" 00:21:17.493 } 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:17.493 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.494 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.494 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.753 request: 00:21:17.753 { 00:21:17.753 "name": "nvme0", 00:21:17.753 "trtype": "tcp", 00:21:17.753 "traddr": "10.0.0.2", 00:21:17.753 "adrfam": "ipv4", 00:21:17.753 "trsvcid": "4420", 00:21:17.753 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:17.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:17.753 "prchk_reftag": false, 00:21:17.753 "prchk_guard": false, 00:21:17.753 "hdgst": false, 00:21:17.753 "ddgst": false, 00:21:17.753 "dhchap_key": "key3", 00:21:17.753 "method": "bdev_nvme_attach_controller", 00:21:17.753 "req_id": 1 00:21:17.753 } 00:21:17.753 Got JSON-RPC error response 00:21:17.753 response: 00:21:17.753 { 00:21:17.753 "code": -5, 00:21:17.753 "message": "Input/output error" 00:21:17.753 } 00:21:17.753 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:17.753 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.753 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.753 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.753 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:17.753 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:17.753 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:17.753 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.753 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.753 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.012 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:18.271 request: 00:21:18.271 { 00:21:18.271 "name": "nvme0", 00:21:18.271 "trtype": "tcp", 00:21:18.271 "traddr": "10.0.0.2", 00:21:18.271 "adrfam": "ipv4", 00:21:18.271 "trsvcid": "4420", 00:21:18.271 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:18.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:18.271 "prchk_reftag": false, 00:21:18.271 "prchk_guard": false, 00:21:18.271 "hdgst": false, 00:21:18.271 "ddgst": false, 00:21:18.271 
"dhchap_key": "key0", 00:21:18.271 "dhchap_ctrlr_key": "key1", 00:21:18.271 "method": "bdev_nvme_attach_controller", 00:21:18.271 "req_id": 1 00:21:18.271 } 00:21:18.271 Got JSON-RPC error response 00:21:18.271 response: 00:21:18.271 { 00:21:18.271 "code": -5, 00:21:18.271 "message": "Input/output error" 00:21:18.271 } 00:21:18.271 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:18.271 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:18.271 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:18.271 19:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:18.271 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:18.271 19:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:18.271 00:21:18.530 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:18.530 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:18.530 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.530 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.530 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.530 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1635317 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1635317 ']' 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1635317 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1635317 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:18.789 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1635317' 00:21:18.789 killing process with pid 1635317 00:21:18.790 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1635317 00:21:18.790 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1635317 
00:21:19.049 19:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:19.049 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.049 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:19.049 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.049 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:19.049 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.049 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.049 rmmod nvme_tcp 00:21:19.049 rmmod nvme_fabrics 00:21:19.049 rmmod nvme_keyring 00:21:19.049 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1656050 ']' 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1656050 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1656050 ']' 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1656050 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1656050 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1656050' 00:21:19.308 killing process with pid 1656050 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1656050 00:21:19.308 19:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1656050 00:21:19.308 19:27:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.308 19:27:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.308 19:27:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.308 19:27:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.308 19:27:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.308 19:27:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.308 19:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.308 19:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.842 19:27:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.843 19:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.CBL /tmp/spdk.key-sha256.sLc /tmp/spdk.key-sha384.VT7 /tmp/spdk.key-sha512.FeL /tmp/spdk.key-sha512.1gv /tmp/spdk.key-sha384.IAs /tmp/spdk.key-sha256.Lya '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:21.843 00:21:21.843 real 2m9.602s 00:21:21.843 user 4m58.138s 00:21:21.843 sys 0m20.469s 00:21:21.843 19:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.843 19:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.843 ************************************ 00:21:21.843 END TEST nvmf_auth_target 00:21:21.843 ************************************ 00:21:21.843 19:27:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:21.843 19:27:32 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:21.843 19:27:32 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:21.843 19:27:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:21.843 19:27:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.843 19:27:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.843 ************************************ 00:21:21.843 START TEST nvmf_bdevio_no_huge 00:21:21.843 ************************************ 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:21.843 * Looking for test storage... 00:21:21.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
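The nvmf_auth_target suite above (about 2m10s of wall time) kept exercising one core flow: the target authorizes the host NQN for the subsystem with a DH-HMAC-CHAP key, the host pins its negotiable digest and DH group and attaches with the matching key, the qpair's auth state is checked, and everything is torn back down; a deliberately mismatched or missing key, as in the NOT cases, makes the attach fail with JSON-RPC error -5 (Input/output error). A minimal sketch using the same RPCs, with the rpc.py path shortened, $hostnqn standing for the UUID host NQN used throughout, and key0/ckey0 naming keys registered earlier in the script, outside this excerpt:

# Target side: authorize the host and bind its DH-HMAC-CHAP keys
./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host side: restrict what may be negotiated, then attach with the same keys
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Verify the negotiated auth parameters, then tear down
./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0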
00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.843 19:27:32 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.843 19:27:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.115 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.115 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.115 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.115 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
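[editor's note] The gather_supported_nvmf_pci_devs trace here filters the node's PCI bus against a whitelist of NIC device IDs; the two Intel E810 IDs (0x1592, 0x159b) appear at this point, with the X722 and Mellanox IDs following. Purely as an illustrative check outside the script (assuming pciutils is installed; device naming is not taken from the log), the same match can be reproduced with lspci:

  lspci -d 8086:1592   # E810 ID whitelisted by the script
  lspci -d 8086:159b   # E810 ID that matches the two ports this run finds a few lines further on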
00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:27.116 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:27.116 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.116 
19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:27.116 Found net devices under 0000:86:00.0: cvl_0_0 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:27.116 Found net devices under 0000:86:00.1: cvl_0_1 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.116 19:27:37 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:21:27.116 00:21:27.116 --- 10.0.0.2 ping statistics --- 00:21:27.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.116 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:21:27.116 00:21:27.116 --- 10.0.0.1 ping statistics --- 00:21:27.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.116 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1660092 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1660092 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1660092 ']' 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.116 19:27:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.116 [2024-07-15 19:27:37.779115] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:21:27.116 [2024-07-15 19:27:37.779161] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:27.117 [2024-07-15 19:27:37.819531] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:27.117 [2024-07-15 19:27:37.840696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.117 [2024-07-15 19:27:37.906022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.117 [2024-07-15 19:27:37.906057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.117 [2024-07-15 19:27:37.906064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.117 [2024-07-15 19:27:37.906071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.117 [2024-07-15 19:27:37.906076] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
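[editor's note] Condensed from the nvmf_tcp_init and nvmfappstart trace above, this is roughly the network and target setup the test performs. Interface names (cvl_0_0 / cvl_0_1), the PCI ports, and the 10.0.0.x addresses are specific to this node, so treat it as a sketch rather than a recipe:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
  modprobe nvme-tcp

  # The target is then launched inside the namespace with 1024 MB of ordinary memory, no hugepages:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78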
00:21:27.117 [2024-07-15 19:27:37.906199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:27.117 [2024-07-15 19:27:37.906308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:27.117 [2024-07-15 19:27:37.906415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.117 [2024-07-15 19:27:37.906416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:28.050 [2024-07-15 19:27:38.635638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:28.050 Malloc0 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:28.050 [2024-07-15 19:27:38.679896] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.050 { 00:21:28.050 "params": { 00:21:28.050 "name": "Nvme$subsystem", 00:21:28.050 "trtype": "$TEST_TRANSPORT", 00:21:28.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.050 "adrfam": "ipv4", 00:21:28.050 "trsvcid": "$NVMF_PORT", 00:21:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.050 "hdgst": ${hdgst:-false}, 00:21:28.050 "ddgst": ${ddgst:-false} 00:21:28.050 }, 00:21:28.050 "method": "bdev_nvme_attach_controller" 00:21:28.050 } 00:21:28.050 EOF 00:21:28.050 )") 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:28.050 19:27:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:28.050 "params": { 00:21:28.050 "name": "Nvme1", 00:21:28.050 "trtype": "tcp", 00:21:28.050 "traddr": "10.0.0.2", 00:21:28.050 "adrfam": "ipv4", 00:21:28.050 "trsvcid": "4420", 00:21:28.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.050 "hdgst": false, 00:21:28.050 "ddgst": false 00:21:28.050 }, 00:21:28.051 "method": "bdev_nvme_attach_controller" 00:21:28.051 }' 00:21:28.051 [2024-07-15 19:27:38.729918] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:21:28.051 [2024-07-15 19:27:38.729966] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1660332 ] 00:21:28.051 [2024-07-15 19:27:38.764753] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
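[editor's note] The rpc_cmd calls traced above amount to the following target configuration; this is an equivalent rendering against scripts/rpc.py (the wrapper in the log issues the same RPCs, not necessarily these literal command lines), followed by the bdevio invocation that consumes the JSON printed by gen_nvmf_target_json:

  # Target side, against the nvmf_tgt started in the namespace above:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevio reads a generated JSON config whose controller entry
  # (printed verbatim in the trace) attaches Nvme1 over TCP to that listener:
  #   { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
  #                 "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
  #                 "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false },
  #     "method": "bdev_nvme_attach_controller" }
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
      --json /dev/fd/62 --no-huge -s 1024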
00:21:28.051 [2024-07-15 19:27:38.784942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:28.051 [2024-07-15 19:27:38.851451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.051 [2024-07-15 19:27:38.851546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.051 [2024-07-15 19:27:38.851547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.616 I/O targets: 00:21:28.616 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:28.616 00:21:28.616 00:21:28.616 CUnit - A unit testing framework for C - Version 2.1-3 00:21:28.616 http://cunit.sourceforge.net/ 00:21:28.616 00:21:28.616 00:21:28.616 Suite: bdevio tests on: Nvme1n1 00:21:28.616 Test: blockdev write read block ...passed 00:21:28.616 Test: blockdev write zeroes read block ...passed 00:21:28.616 Test: blockdev write zeroes read no split ...passed 00:21:28.616 Test: blockdev write zeroes read split ...passed 00:21:28.616 Test: blockdev write zeroes read split partial ...passed 00:21:28.616 Test: blockdev reset ...[2024-07-15 19:27:39.397610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:28.616 [2024-07-15 19:27:39.397671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf46e0 (9): Bad file descriptor 00:21:28.873 [2024-07-15 19:27:39.508645] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:28.873 passed 00:21:28.873 Test: blockdev write read 8 blocks ...passed 00:21:28.873 Test: blockdev write read size > 128k ...passed 00:21:28.873 Test: blockdev write read invalid size ...passed 00:21:28.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:28.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:28.873 Test: blockdev write read max offset ...passed 00:21:28.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:28.873 Test: blockdev writev readv 8 blocks ...passed 00:21:28.873 Test: blockdev writev readv 30 x 1block ...passed 00:21:28.873 Test: blockdev writev readv block ...passed 00:21:28.873 Test: blockdev writev readv size > 128k ...passed 00:21:28.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:28.873 Test: blockdev comparev and writev ...[2024-07-15 19:27:39.719600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.873 [2024-07-15 19:27:39.719628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.873 [2024-07-15 19:27:39.719642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.873 [2024-07-15 19:27:39.719650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.873 [2024-07-15 19:27:39.719930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.873 [2024-07-15 19:27:39.719939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:28.873 [2024-07-15 19:27:39.719951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:21:28.873 [2024-07-15 19:27:39.719957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:28.873 [2024-07-15 19:27:39.720237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.873 [2024-07-15 19:27:39.720248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:28.873 [2024-07-15 19:27:39.720259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.873 [2024-07-15 19:27:39.720266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:28.873 [2024-07-15 19:27:39.720542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.873 [2024-07-15 19:27:39.720552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:28.873 [2024-07-15 19:27:39.720563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.873 [2024-07-15 19:27:39.720570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:29.130 passed 00:21:29.130 Test: blockdev nvme passthru rw ...passed 00:21:29.130 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:27:39.802669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:29.130 [2024-07-15 19:27:39.802684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:29.130 [2024-07-15 19:27:39.802826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:29.130 [2024-07-15 19:27:39.802836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:29.130 [2024-07-15 19:27:39.802976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:29.130 [2024-07-15 19:27:39.802985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:29.130 [2024-07-15 19:27:39.803136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:29.130 [2024-07-15 19:27:39.803145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:29.130 passed 00:21:29.130 Test: blockdev nvme admin passthru ...passed 00:21:29.130 Test: blockdev copy ...passed 00:21:29.130 00:21:29.130 Run Summary: Type Total Ran Passed Failed Inactive 00:21:29.130 suites 1 1 n/a 0 0 00:21:29.130 tests 23 23 23 0 0 00:21:29.130 asserts 152 152 152 0 n/a 00:21:29.130 00:21:29.130 Elapsed time = 1.343 seconds 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:29.387 rmmod nvme_tcp 00:21:29.387 rmmod nvme_fabrics 00:21:29.387 rmmod nvme_keyring 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1660092 ']' 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1660092 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1660092 ']' 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1660092 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.387 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1660092 00:21:29.673 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:29.673 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:29.673 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1660092' 00:21:29.673 killing process with pid 1660092 00:21:29.673 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1660092 00:21:29.673 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1660092 00:21:29.946 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:29.946 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:29.946 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:29.946 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.946 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.946 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.946 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.946 19:27:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:21:31.849 19:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.849 00:21:31.849 real 0m10.347s 00:21:31.849 user 0m14.507s 00:21:31.849 sys 0m4.926s 00:21:31.849 19:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:31.849 19:27:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:31.849 ************************************ 00:21:31.849 END TEST nvmf_bdevio_no_huge 00:21:31.849 ************************************ 00:21:31.849 19:27:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:31.849 19:27:42 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:31.849 19:27:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:31.849 19:27:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.849 19:27:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.849 ************************************ 00:21:31.849 START TEST nvmf_tls 00:21:31.849 ************************************ 00:21:31.849 19:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:32.108 * Looking for test storage... 00:21:32.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:32.108 19:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.378 
19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:37.378 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:37.378 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:37.378 Found net devices under 0000:86:00.0: cvl_0_0 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.378 19:27:47 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:37.378 Found net devices under 0000:86:00.1: cvl_0_1 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.378 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:37.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:37.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:21:37.379 00:21:37.379 --- 10.0.0.2 ping statistics --- 00:21:37.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.379 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:37.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:21:37.379 00:21:37.379 --- 10.0.0.1 ping statistics --- 00:21:37.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.379 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1663860 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1663860 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1663860 ']' 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:37.379 [2024-07-15 19:27:47.471554] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:21:37.379 [2024-07-15 19:27:47.471597] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.379 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.379 [2024-07-15 19:27:47.502242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:37.379 [2024-07-15 19:27:47.529796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.379 [2024-07-15 19:27:47.569553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.379 [2024-07-15 19:27:47.569590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.379 [2024-07-15 19:27:47.569597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.379 [2024-07-15 19:27:47.569602] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.379 [2024-07-15 19:27:47.569608] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:37.379 [2024-07-15 19:27:47.569646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:37.379 true 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:37.379 19:27:47 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:37.379 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.379 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:37.638 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:37.638 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:37.638 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:37.638 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:21:37.638 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:37.897 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:37.897 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:37.897 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.897 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:38.166 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:38.166 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:38.166 19:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:38.166 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:38.166 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:38.425 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:38.425 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:38.425 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:38.684 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@121 
-- # mktemp 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.3Qt3Pg8nkI 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.WOIWxY69JM 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.3Qt3Pg8nkI 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.WOIWxY69JM 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:38.943 19:27:49 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:39.202 19:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.3Qt3Pg8nkI 00:21:39.202 19:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3Qt3Pg8nkI 00:21:39.202 19:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:39.461 [2024-07-15 19:27:50.167033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.461 19:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:39.720 19:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:39.720 [2024-07-15 19:27:50.495854] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.720 [2024-07-15 19:27:50.496064] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.720 19:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:39.979 malloc0 00:21:39.979 19:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:40.237 19:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3Qt3Pg8nkI 00:21:40.237 [2024-07-15 19:27:50.997278] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:40.237 19:27:51 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3Qt3Pg8nkI 00:21:40.237 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.447 Initializing NVMe Controllers 00:21:52.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:21:52.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:52.447 Initialization complete. Launching workers. 00:21:52.447 ======================================================== 00:21:52.447 Latency(us) 00:21:52.447 Device Information : IOPS MiB/s Average min max 00:21:52.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16378.42 63.98 3908.01 777.02 5865.82 00:21:52.447 ======================================================== 00:21:52.447 Total : 16378.42 63.98 3908.01 777.02 5865.82 00:21:52.447 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3Qt3Pg8nkI 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3Qt3Pg8nkI' 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1666193 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1666193 /var/tmp/bdevperf.sock 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1666193 ']' 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.447 [2024-07-15 19:28:01.137768] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:21:52.447 [2024-07-15 19:28:01.137813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666193 ] 00:21:52.447 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.447 [2024-07-15 19:28:01.164115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
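Stripped of the xtrace noise, the target-side configuration driven over the RPC socket above reduces to a short sequence: the ssl socket implementation is selected and pinned to TLS 1.3 before framework_start_init (possible because nvmf_tgt was launched with --wait-for-rpc), then the TCP transport, the subsystem with a malloc namespace, the TLS-enabled listener and the PSK-carrying host entry are created. The same calls, with only the long Jenkins paths trimmed; the PSK file name is simply the mktemp result of this run:

# The rpc.py sequence behind setup_nvmf_tgt above (rpc.py talks to /var/tmp/spdk.sock by default).
RPC=scripts/rpc.py
$RPC sock_set_default_impl -i ssl
$RPC sock_impl_set_options -i ssl --tls-version 13               # pin the ssl impl to TLS 1.3
$RPC framework_start_init                                        # finish startup of the --wait-for-rpc target
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3Qt3Pg8nkI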
00:21:52.447 [2024-07-15 19:28:01.188106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.447 [2024-07-15 19:28:01.228833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3Qt3Pg8nkI 00:21:52.447 [2024-07-15 19:28:01.463639] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.447 [2024-07-15 19:28:01.463709] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:52.447 TLSTESTn1 00:21:52.447 19:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:52.447 Running I/O for 10 seconds... 00:22:02.425 00:22:02.425 Latency(us) 00:22:02.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.425 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:02.425 Verification LBA range: start 0x0 length 0x2000 00:22:02.425 TLSTESTn1 : 10.07 4490.89 17.54 0.00 0.00 28428.27 6097.70 74768.03 00:22:02.425 =================================================================================================================== 00:22:02.425 Total : 4490.89 17.54 0.00 0.00 28428.27 6097.70 74768.03 00:22:02.425 0 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1666193 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1666193 ']' 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1666193 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1666193 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1666193' 00:22:02.425 killing process with pid 1666193 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1666193 00:22:02.425 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.425 00:22:02.425 Latency(us) 00:22:02.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.425 =================================================================================================================== 00:22:02.425 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.425 [2024-07-15 19:28:11.791179] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1666193 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WOIWxY69JM 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WOIWxY69JM 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WOIWxY69JM 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WOIWxY69JM' 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1667950 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1667950 /var/tmp/bdevperf.sock 00:22:02.425 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1667950 ']' 00:22:02.426 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.426 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.426 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.426 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.426 19:28:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.426 [2024-07-15 19:28:12.010389] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:02.426 [2024-07-15 19:28:12.010437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667950 ] 00:22:02.426 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.426 [2024-07-15 19:28:12.037238] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:22:02.426 [2024-07-15 19:28:12.061396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.426 [2024-07-15 19:28:12.102595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WOIWxY69JM 00:22:02.426 [2024-07-15 19:28:12.338910] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.426 [2024-07-15 19:28:12.338977] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:02.426 [2024-07-15 19:28:12.348103] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.426 [2024-07-15 19:28:12.348316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ade80 (107): Transport endpoint is not connected 00:22:02.426 [2024-07-15 19:28:12.349306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ade80 (9): Bad file descriptor 00:22:02.426 [2024-07-15 19:28:12.350308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.426 [2024-07-15 19:28:12.350317] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:02.426 [2024-07-15 19:28:12.350328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
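This failure is the intended outcome of the negative case at target/tls.sh@146: the target knows host1 by the key in /tmp/tmp.3Qt3Pg8nkI, while bdevperf is handed the second key, so the TLS handshake never completes and the attach surfaces as the JSON-RPC Input/output error dumped just below. Reduced to the single call involved (bdevperf itself was started idle with -z -r /var/tmp/bdevperf.sock):

# Initiator-side attach over bdevperf's private RPC socket; the mismatched PSK makes it fail.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.WOIWxY69JM
# -> Input/output error: the connection is torn down during the handshake because this PSK
#    is not the one registered for host1 on the target.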
00:22:02.426 request: 00:22:02.426 { 00:22:02.426 "name": "TLSTEST", 00:22:02.426 "trtype": "tcp", 00:22:02.426 "traddr": "10.0.0.2", 00:22:02.426 "adrfam": "ipv4", 00:22:02.426 "trsvcid": "4420", 00:22:02.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.426 "prchk_reftag": false, 00:22:02.426 "prchk_guard": false, 00:22:02.426 "hdgst": false, 00:22:02.426 "ddgst": false, 00:22:02.426 "psk": "/tmp/tmp.WOIWxY69JM", 00:22:02.426 "method": "bdev_nvme_attach_controller", 00:22:02.426 "req_id": 1 00:22:02.426 } 00:22:02.426 Got JSON-RPC error response 00:22:02.426 response: 00:22:02.426 { 00:22:02.426 "code": -5, 00:22:02.426 "message": "Input/output error" 00:22:02.426 } 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1667950 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1667950 ']' 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1667950 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1667950 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1667950' 00:22:02.426 killing process with pid 1667950 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1667950 00:22:02.426 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.426 00:22:02.426 Latency(us) 00:22:02.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.426 =================================================================================================================== 00:22:02.426 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.426 [2024-07-15 19:28:12.417193] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1667950 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3Qt3Pg8nkI 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3Qt3Pg8nkI 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3Qt3Pg8nkI 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3Qt3Pg8nkI' 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1668037 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1668037 /var/tmp/bdevperf.sock 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1668037 ']' 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.426 [2024-07-15 19:28:12.624824] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:02.426 [2024-07-15 19:28:12.624876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668037 ] 00:22:02.426 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.426 [2024-07-15 19:28:12.651235] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:02.426 [2024-07-15 19:28:12.675114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.426 [2024-07-15 19:28:12.715462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.3Qt3Pg8nkI 00:22:02.426 [2024-07-15 19:28:12.954509] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.426 [2024-07-15 19:28:12.954576] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:02.426 [2024-07-15 19:28:12.959592] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:02.426 [2024-07-15 19:28:12.959613] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:02.426 [2024-07-15 19:28:12.959641] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.426 [2024-07-15 19:28:12.959921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ace80 (107): Transport endpoint is not connected 00:22:02.426 [2024-07-15 19:28:12.960912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ace80 (9): Bad file descriptor 00:22:02.426 [2024-07-15 19:28:12.961914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.426 [2024-07-15 19:28:12.961923] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:02.426 [2024-07-15 19:28:12.961931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:02.426 request: 00:22:02.426 { 00:22:02.426 "name": "TLSTEST", 00:22:02.426 "trtype": "tcp", 00:22:02.426 "traddr": "10.0.0.2", 00:22:02.426 "adrfam": "ipv4", 00:22:02.426 "trsvcid": "4420", 00:22:02.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.426 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:02.426 "prchk_reftag": false, 00:22:02.426 "prchk_guard": false, 00:22:02.426 "hdgst": false, 00:22:02.426 "ddgst": false, 00:22:02.426 "psk": "/tmp/tmp.3Qt3Pg8nkI", 00:22:02.426 "method": "bdev_nvme_attach_controller", 00:22:02.426 "req_id": 1 00:22:02.426 } 00:22:02.426 Got JSON-RPC error response 00:22:02.426 response: 00:22:02.426 { 00:22:02.426 "code": -5, 00:22:02.426 "message": "Input/output error" 00:22:02.426 } 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1668037 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1668037 ']' 00:22:02.426 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1668037 00:22:02.427 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:02.427 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.427 19:28:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1668037 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1668037' 00:22:02.427 killing process with pid 1668037 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1668037 00:22:02.427 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.427 00:22:02.427 Latency(us) 00:22:02.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.427 =================================================================================================================== 00:22:02.427 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.427 [2024-07-15 19:28:13.018718] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1668037 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3Qt3Pg8nkI 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3Qt3Pg8nkI 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3Qt3Pg8nkI 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3Qt3Pg8nkI' 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1668084 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1668084 /var/tmp/bdevperf.sock 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1668084 ']' 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.427 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.427 [2024-07-15 19:28:13.226859] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:02.427 [2024-07-15 19:28:13.226906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668084 ] 00:22:02.427 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.427 [2024-07-15 19:28:13.253340] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:02.427 [2024-07-15 19:28:13.277469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.686 [2024-07-15 19:28:13.318443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.686 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.686 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:02.686 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3Qt3Pg8nkI 00:22:02.945 [2024-07-15 19:28:13.558843] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.945 [2024-07-15 19:28:13.558925] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:02.945 [2024-07-15 19:28:13.563505] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:02.945 [2024-07-15 19:28:13.563524] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:02.945 [2024-07-15 19:28:13.563545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.945 [2024-07-15 19:28:13.564217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ee80 (107): Transport endpoint is not connected 00:22:02.945 [2024-07-15 19:28:13.565210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ee80 (9): Bad file descriptor 00:22:02.945 [2024-07-15 19:28:13.566211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:02.945 [2024-07-15 19:28:13.566220] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:02.945 [2024-07-15 19:28:13.566231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:02.945 request: 00:22:02.945 { 00:22:02.945 "name": "TLSTEST", 00:22:02.945 "trtype": "tcp", 00:22:02.945 "traddr": "10.0.0.2", 00:22:02.945 "adrfam": "ipv4", 00:22:02.945 "trsvcid": "4420", 00:22:02.945 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:02.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.945 "prchk_reftag": false, 00:22:02.945 "prchk_guard": false, 00:22:02.945 "hdgst": false, 00:22:02.945 "ddgst": false, 00:22:02.945 "psk": "/tmp/tmp.3Qt3Pg8nkI", 00:22:02.945 "method": "bdev_nvme_attach_controller", 00:22:02.945 "req_id": 1 00:22:02.945 } 00:22:02.945 Got JSON-RPC error response 00:22:02.945 response: 00:22:02.945 { 00:22:02.945 "code": -5, 00:22:02.945 "message": "Input/output error" 00:22:02.945 } 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1668084 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1668084 ']' 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1668084 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1668084 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1668084' 00:22:02.945 killing process with pid 1668084 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1668084 00:22:02.945 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.945 00:22:02.945 Latency(us) 00:22:02.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.945 =================================================================================================================== 00:22:02.945 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.945 [2024-07-15 19:28:13.626144] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1668084 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1668282 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1668282 /var/tmp/bdevperf.sock 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1668282 ']' 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.945 19:28:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.204 [2024-07-15 19:28:13.839825] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:03.204 [2024-07-15 19:28:13.839870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668282 ] 00:22:03.204 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.204 [2024-07-15 19:28:13.865917] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:03.204 [2024-07-15 19:28:13.890736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.204 [2024-07-15 19:28:13.926935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.204 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.204 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:03.204 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:03.463 [2024-07-15 19:28:14.162661] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:03.463 [2024-07-15 19:28:14.164436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182b3f0 (9): Bad file descriptor 00:22:03.463 [2024-07-15 19:28:14.165433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.463 [2024-07-15 19:28:14.165444] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:03.463 [2024-07-15 19:28:14.165452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.463 request: 00:22:03.463 { 00:22:03.463 "name": "TLSTEST", 00:22:03.463 "trtype": "tcp", 00:22:03.463 "traddr": "10.0.0.2", 00:22:03.463 "adrfam": "ipv4", 00:22:03.463 "trsvcid": "4420", 00:22:03.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.463 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.463 "prchk_reftag": false, 00:22:03.463 "prchk_guard": false, 00:22:03.463 "hdgst": false, 00:22:03.463 "ddgst": false, 00:22:03.463 "method": "bdev_nvme_attach_controller", 00:22:03.463 "req_id": 1 00:22:03.463 } 00:22:03.463 Got JSON-RPC error response 00:22:03.463 response: 00:22:03.463 { 00:22:03.463 "code": -5, 00:22:03.463 "message": "Input/output error" 00:22:03.463 } 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1668282 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1668282 ']' 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1668282 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1668282 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1668282' 00:22:03.463 killing process with pid 1668282 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1668282 00:22:03.463 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.463 00:22:03.463 Latency(us) 00:22:03.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.463 =================================================================================================================== 00:22:03.463 Total : 0.00 0.00 
0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.463 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1668282 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1663860 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1663860 ']' 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1663860 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1663860 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1663860' 00:22:03.722 killing process with pid 1663860 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1663860 00:22:03.722 [2024-07-15 19:28:14.415869] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:03.722 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1663860 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.IMYo7rCYwo 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.IMYo7rCYwo 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:03.981 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
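Every key file used by this script comes out of format_interchange_psk, which wraps a configured hex string in the PSK interchange form "NVMeTLSkey-1:<hash>:<base64 blob>:" via the inline python call traced above. A standalone sketch of what that helper appears to compute; the base64 payload being the key bytes plus a 4-byte little-endian CRC-32 is inferred from the values printed in this log, so treat it as an approximation of nvmf/common.sh rather than a copy:

# Assumed layout: NVMeTLSkey-1:<hash id>:<base64(key bytes + little-endian CRC-32)>:
psk_interchange() {
    local key=$1 hash_id=$2      # hash id 1 was used for the shorter keys above, 2 for the longer one
    python3 - "$key" "$hash_id" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}
key_path=$(mktemp)                                             # e.g. /tmp/tmp.IMYo7rCYwo above
echo -n "$(psk_interchange 00112233445566778899aabbccddeeff0011223344556677 2)" > "$key_path"
chmod 0600 "$key_path"   # kept 0600 here; a later case in this script flips it to 0666 to provoke a failure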
00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1668378 00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1668378 00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1668378 ']' 00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.982 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.982 [2024-07-15 19:28:14.705532] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:03.982 [2024-07-15 19:28:14.705580] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.982 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.982 [2024-07-15 19:28:14.735328] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:03.982 [2024-07-15 19:28:14.764079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.982 [2024-07-15 19:28:14.803325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.982 [2024-07-15 19:28:14.803363] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.982 [2024-07-15 19:28:14.803370] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.982 [2024-07-15 19:28:14.803376] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.982 [2024-07-15 19:28:14.803381] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:03.982 [2024-07-15 19:28:14.803400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.241 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.241 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:04.241 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.241 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.241 19:28:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.241 19:28:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.241 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.IMYo7rCYwo 00:22:04.241 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMYo7rCYwo 00:22:04.241 19:28:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:04.241 [2024-07-15 19:28:15.075561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.241 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:04.500 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:04.759 [2024-07-15 19:28:15.420438] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.759 [2024-07-15 19:28:15.420642] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.759 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:04.759 malloc0 00:22:04.759 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:05.018 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMYo7rCYwo 00:22:05.313 [2024-07-15 19:28:15.945962] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:05.313 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMYo7rCYwo 00:22:05.313 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:05.313 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:05.313 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:05.313 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IMYo7rCYwo' 00:22:05.313 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.313 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1668578 00:22:05.313 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:05.313 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1668578 
/var/tmp/bdevperf.sock 00:22:05.314 19:28:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1668578 ']' 00:22:05.314 19:28:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.314 19:28:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.314 19:28:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.314 19:28:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:05.314 19:28:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.314 19:28:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.314 [2024-07-15 19:28:16.004813] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:05.314 [2024-07-15 19:28:16.004859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668578 ] 00:22:05.314 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.314 [2024-07-15 19:28:16.031382] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:05.314 [2024-07-15 19:28:16.054772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.314 [2024-07-15 19:28:16.093541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.580 19:28:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.581 19:28:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:05.581 19:28:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMYo7rCYwo 00:22:05.581 [2024-07-15 19:28:16.333008] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.581 [2024-07-15 19:28:16.333077] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:05.581 TLSTESTn1 00:22:05.581 19:28:16 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:05.839 Running I/O for 10 seconds... 
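[Editor's note] The trace above is the complete TLS happy path (target/tls.sh setup_nvmf_tgt followed by run_bdevperf): the target gets a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a TLS-enabled listener (-k), a malloc0 namespace, and an allowed host keyed to the PSK file; bdevperf then attaches with the same PSK and drives verify I/O for 10 seconds. A condensed sketch of the same calls, assuming they are issued from the SPDK tree (the full /var/jenkins/... paths from the log are shortened):

    RPC=./scripts/rpc.py
    # Target side (default RPC socket /var/tmp/spdk.sock).
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks the listener as TLS
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMYo7rCYwo

    # Initiator side: bdevperf was started with -z -r /var/tmp/bdevperf.sock, so attach and run through its RPC socket.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMYo7rCYwo
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests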
00:22:15.809 00:22:15.809 Latency(us) 00:22:15.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.809 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:15.809 Verification LBA range: start 0x0 length 0x2000 00:22:15.809 TLSTESTn1 : 10.02 5465.35 21.35 0.00 0.00 23378.91 7294.44 47413.87 00:22:15.809 =================================================================================================================== 00:22:15.809 Total : 5465.35 21.35 0.00 0.00 23378.91 7294.44 47413.87 00:22:15.809 0 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1668578 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1668578 ']' 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1668578 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1668578 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1668578' 00:22:15.809 killing process with pid 1668578 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1668578 00:22:15.809 Received shutdown signal, test time was about 10.000000 seconds 00:22:15.809 00:22:15.809 Latency(us) 00:22:15.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.809 =================================================================================================================== 00:22:15.809 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.809 [2024-07-15 19:28:26.619501] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:15.809 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1668578 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.IMYo7rCYwo 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMYo7rCYwo 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMYo7rCYwo 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IMYo7rCYwo 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn 
psk 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IMYo7rCYwo' 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1670389 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1670389 /var/tmp/bdevperf.sock 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1670389 ']' 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.069 19:28:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.069 [2024-07-15 19:28:26.844117] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:16.069 [2024-07-15 19:28:26.844162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1670389 ] 00:22:16.069 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.069 [2024-07-15 19:28:26.870102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:16.069 [2024-07-15 19:28:26.894244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.328 [2024-07-15 19:28:26.934772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.328 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.328 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:16.328 19:28:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMYo7rCYwo 00:22:16.328 [2024-07-15 19:28:27.166203] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.328 [2024-07-15 19:28:27.166251] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:16.328 [2024-07-15 19:28:27.166259] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.IMYo7rCYwo 00:22:16.328 request: 00:22:16.328 { 00:22:16.328 "name": "TLSTEST", 00:22:16.328 "trtype": "tcp", 00:22:16.328 "traddr": "10.0.0.2", 00:22:16.328 "adrfam": "ipv4", 00:22:16.328 "trsvcid": "4420", 00:22:16.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.328 "prchk_reftag": false, 00:22:16.328 "prchk_guard": false, 00:22:16.328 "hdgst": false, 00:22:16.328 "ddgst": false, 00:22:16.328 "psk": "/tmp/tmp.IMYo7rCYwo", 00:22:16.328 "method": "bdev_nvme_attach_controller", 00:22:16.328 "req_id": 1 00:22:16.328 } 00:22:16.328 Got JSON-RPC error response 00:22:16.328 response: 00:22:16.328 { 00:22:16.328 "code": -1, 00:22:16.328 "message": "Operation not permitted" 00:22:16.328 } 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1670389 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1670389 ']' 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1670389 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1670389 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1670389' 00:22:16.587 killing process with pid 1670389 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1670389 00:22:16.587 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.587 00:22:16.587 Latency(us) 00:22:16.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.587 =================================================================================================================== 00:22:16.587 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1670389 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:16.587 19:28:27 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1668378 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1668378 ']' 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1668378 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.587 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1668378 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1668378' 00:22:16.846 killing process with pid 1668378 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1668378 00:22:16.846 [2024-07-15 19:28:27.449215] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1668378 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1670616 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1670616 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1670616 ']' 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.846 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.846 [2024-07-15 19:28:27.689136] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:22:16.846 [2024-07-15 19:28:27.689184] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.106 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.106 [2024-07-15 19:28:27.719178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:17.106 [2024-07-15 19:28:27.748828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.106 [2024-07-15 19:28:27.785190] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.106 [2024-07-15 19:28:27.785242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.106 [2024-07-15 19:28:27.785248] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.106 [2024-07-15 19:28:27.785254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.106 [2024-07-15 19:28:27.785258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.106 [2024-07-15 19:28:27.785275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.IMYo7rCYwo 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.IMYo7rCYwo 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.IMYo7rCYwo 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMYo7rCYwo 00:22:17.106 19:28:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:17.365 [2024-07-15 19:28:28.065425] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.365 19:28:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:17.624 19:28:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:17.624 [2024-07-15 19:28:28.402299] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:17.624 [2024-07-15 19:28:28.402495] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.624 19:28:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:17.882 malloc0 00:22:17.882 19:28:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMYo7rCYwo 00:22:18.140 [2024-07-15 19:28:28.915690] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:18.140 [2024-07-15 19:28:28.915718] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:18.140 [2024-07-15 19:28:28.915742] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:18.140 request: 00:22:18.140 { 00:22:18.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.140 "host": "nqn.2016-06.io.spdk:host1", 00:22:18.140 "psk": "/tmp/tmp.IMYo7rCYwo", 00:22:18.140 "method": "nvmf_subsystem_add_host", 00:22:18.140 "req_id": 1 00:22:18.140 } 00:22:18.140 Got JSON-RPC error response 00:22:18.140 response: 00:22:18.140 { 00:22:18.140 "code": -32603, 00:22:18.140 "message": "Internal error" 00:22:18.140 } 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1670616 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1670616 ']' 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1670616 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1670616 00:22:18.140 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:18.141 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:18.141 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1670616' 00:22:18.141 killing process with pid 1670616 00:22:18.141 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1670616 00:22:18.141 19:28:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1670616 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.IMYo7rCYwo 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1670891 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1670891 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1670891 ']' 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:18.399 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.399 [2024-07-15 19:28:29.214853] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:18.399 [2024-07-15 19:28:29.214900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.399 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.399 [2024-07-15 19:28:29.243888] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:18.658 [2024-07-15 19:28:29.270132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.658 [2024-07-15 19:28:29.309831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.658 [2024-07-15 19:28:29.309869] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.658 [2024-07-15 19:28:29.309876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.658 [2024-07-15 19:28:29.309883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.658 [2024-07-15 19:28:29.309888] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
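[Editor's note] The two failures above are both consequences of the earlier chmod 0666 on the key file: bdev_nvme_attach_controller reports "Incorrect permissions for PSK file" and returns JSON-RPC code -1 ("Operation not permitted"), and nvmf_subsystem_add_host trips the same check and returns -32603 ("Internal error"). The script restores a restrictive mode (chmod 0600) before the next target start. A small illustrative sketch of the permission toggle being exercised, with the key path taken from the log (the stat call is only an illustration):

    chmod 0666 /tmp/tmp.IMYo7rCYwo     # world-readable key -> "Incorrect permissions for PSK file" on both RPCs
    chmod 0600 /tmp/tmp.IMYo7rCYwo     # owner-only again; the subsequent setup_nvmf_tgt succeeds
    stat -c '%a %n' /tmp/tmp.IMYo7rCYwo   # quick check of the current mode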
00:22:18.658 [2024-07-15 19:28:29.309904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.658 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.658 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:18.658 19:28:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.658 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.658 19:28:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.658 19:28:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.658 19:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.IMYo7rCYwo 00:22:18.658 19:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMYo7rCYwo 00:22:18.658 19:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.916 [2024-07-15 19:28:29.582526] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.916 19:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:18.916 19:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.175 [2024-07-15 19:28:29.919394] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.175 [2024-07-15 19:28:29.919584] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.175 19:28:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.433 malloc0 00:22:19.433 19:28:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMYo7rCYwo 00:22:19.691 [2024-07-15 19:28:30.448874] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1671142 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1671142 /var/tmp/bdevperf.sock 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1671142 ']' 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.691 19:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.691 [2024-07-15 19:28:30.509139] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:19.692 [2024-07-15 19:28:30.509185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1671142 ] 00:22:19.692 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.692 [2024-07-15 19:28:30.534898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:19.950 [2024-07-15 19:28:30.561017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.950 [2024-07-15 19:28:30.600115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.950 19:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.950 19:28:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:19.950 19:28:30 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMYo7rCYwo 00:22:20.209 [2024-07-15 19:28:30.830968] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.209 [2024-07-15 19:28:30.831045] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:20.209 TLSTESTn1 00:22:20.209 19:28:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:20.467 19:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:20.467 "subsystems": [ 00:22:20.467 { 00:22:20.467 "subsystem": "keyring", 00:22:20.467 "config": [] 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "subsystem": "iobuf", 00:22:20.467 "config": [ 00:22:20.467 { 00:22:20.467 "method": "iobuf_set_options", 00:22:20.467 "params": { 00:22:20.467 "small_pool_count": 8192, 00:22:20.467 "large_pool_count": 1024, 00:22:20.467 "small_bufsize": 8192, 00:22:20.467 "large_bufsize": 135168 00:22:20.467 } 00:22:20.467 } 00:22:20.467 ] 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "subsystem": "sock", 00:22:20.467 "config": [ 00:22:20.467 { 00:22:20.467 "method": "sock_set_default_impl", 00:22:20.467 "params": { 00:22:20.467 "impl_name": "posix" 00:22:20.467 } 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "method": "sock_impl_set_options", 00:22:20.467 "params": { 00:22:20.467 "impl_name": "ssl", 00:22:20.467 "recv_buf_size": 4096, 00:22:20.467 "send_buf_size": 4096, 00:22:20.467 "enable_recv_pipe": true, 00:22:20.467 "enable_quickack": false, 00:22:20.467 "enable_placement_id": 0, 00:22:20.467 "enable_zerocopy_send_server": true, 00:22:20.467 "enable_zerocopy_send_client": false, 00:22:20.467 "zerocopy_threshold": 0, 00:22:20.467 "tls_version": 0, 00:22:20.467 "enable_ktls": false 00:22:20.467 
} 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "method": "sock_impl_set_options", 00:22:20.467 "params": { 00:22:20.467 "impl_name": "posix", 00:22:20.467 "recv_buf_size": 2097152, 00:22:20.467 "send_buf_size": 2097152, 00:22:20.467 "enable_recv_pipe": true, 00:22:20.467 "enable_quickack": false, 00:22:20.467 "enable_placement_id": 0, 00:22:20.467 "enable_zerocopy_send_server": true, 00:22:20.467 "enable_zerocopy_send_client": false, 00:22:20.467 "zerocopy_threshold": 0, 00:22:20.467 "tls_version": 0, 00:22:20.467 "enable_ktls": false 00:22:20.467 } 00:22:20.467 } 00:22:20.467 ] 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "subsystem": "vmd", 00:22:20.467 "config": [] 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "subsystem": "accel", 00:22:20.467 "config": [ 00:22:20.467 { 00:22:20.467 "method": "accel_set_options", 00:22:20.467 "params": { 00:22:20.467 "small_cache_size": 128, 00:22:20.467 "large_cache_size": 16, 00:22:20.467 "task_count": 2048, 00:22:20.467 "sequence_count": 2048, 00:22:20.467 "buf_count": 2048 00:22:20.467 } 00:22:20.467 } 00:22:20.467 ] 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "subsystem": "bdev", 00:22:20.467 "config": [ 00:22:20.467 { 00:22:20.467 "method": "bdev_set_options", 00:22:20.467 "params": { 00:22:20.467 "bdev_io_pool_size": 65535, 00:22:20.467 "bdev_io_cache_size": 256, 00:22:20.467 "bdev_auto_examine": true, 00:22:20.467 "iobuf_small_cache_size": 128, 00:22:20.467 "iobuf_large_cache_size": 16 00:22:20.467 } 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "method": "bdev_raid_set_options", 00:22:20.467 "params": { 00:22:20.467 "process_window_size_kb": 1024 00:22:20.467 } 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "method": "bdev_iscsi_set_options", 00:22:20.467 "params": { 00:22:20.467 "timeout_sec": 30 00:22:20.467 } 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "method": "bdev_nvme_set_options", 00:22:20.467 "params": { 00:22:20.467 "action_on_timeout": "none", 00:22:20.467 "timeout_us": 0, 00:22:20.467 "timeout_admin_us": 0, 00:22:20.467 "keep_alive_timeout_ms": 10000, 00:22:20.467 "arbitration_burst": 0, 00:22:20.467 "low_priority_weight": 0, 00:22:20.467 "medium_priority_weight": 0, 00:22:20.467 "high_priority_weight": 0, 00:22:20.467 "nvme_adminq_poll_period_us": 10000, 00:22:20.467 "nvme_ioq_poll_period_us": 0, 00:22:20.467 "io_queue_requests": 0, 00:22:20.467 "delay_cmd_submit": true, 00:22:20.467 "transport_retry_count": 4, 00:22:20.467 "bdev_retry_count": 3, 00:22:20.467 "transport_ack_timeout": 0, 00:22:20.467 "ctrlr_loss_timeout_sec": 0, 00:22:20.467 "reconnect_delay_sec": 0, 00:22:20.467 "fast_io_fail_timeout_sec": 0, 00:22:20.467 "disable_auto_failback": false, 00:22:20.467 "generate_uuids": false, 00:22:20.467 "transport_tos": 0, 00:22:20.467 "nvme_error_stat": false, 00:22:20.467 "rdma_srq_size": 0, 00:22:20.467 "io_path_stat": false, 00:22:20.467 "allow_accel_sequence": false, 00:22:20.467 "rdma_max_cq_size": 0, 00:22:20.467 "rdma_cm_event_timeout_ms": 0, 00:22:20.467 "dhchap_digests": [ 00:22:20.467 "sha256", 00:22:20.467 "sha384", 00:22:20.467 "sha512" 00:22:20.467 ], 00:22:20.467 "dhchap_dhgroups": [ 00:22:20.467 "null", 00:22:20.467 "ffdhe2048", 00:22:20.467 "ffdhe3072", 00:22:20.467 "ffdhe4096", 00:22:20.467 "ffdhe6144", 00:22:20.467 "ffdhe8192" 00:22:20.467 ] 00:22:20.467 } 00:22:20.467 }, 00:22:20.467 { 00:22:20.467 "method": "bdev_nvme_set_hotplug", 00:22:20.467 "params": { 00:22:20.467 "period_us": 100000, 00:22:20.467 "enable": false 00:22:20.467 } 00:22:20.467 }, 00:22:20.467 { 00:22:20.468 "method": "bdev_malloc_create", 
00:22:20.468 "params": { 00:22:20.468 "name": "malloc0", 00:22:20.468 "num_blocks": 8192, 00:22:20.468 "block_size": 4096, 00:22:20.468 "physical_block_size": 4096, 00:22:20.468 "uuid": "916a6881-4ff4-4b58-ab47-84ffe5818020", 00:22:20.468 "optimal_io_boundary": 0 00:22:20.468 } 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "method": "bdev_wait_for_examine" 00:22:20.468 } 00:22:20.468 ] 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "subsystem": "nbd", 00:22:20.468 "config": [] 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "subsystem": "scheduler", 00:22:20.468 "config": [ 00:22:20.468 { 00:22:20.468 "method": "framework_set_scheduler", 00:22:20.468 "params": { 00:22:20.468 "name": "static" 00:22:20.468 } 00:22:20.468 } 00:22:20.468 ] 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "subsystem": "nvmf", 00:22:20.468 "config": [ 00:22:20.468 { 00:22:20.468 "method": "nvmf_set_config", 00:22:20.468 "params": { 00:22:20.468 "discovery_filter": "match_any", 00:22:20.468 "admin_cmd_passthru": { 00:22:20.468 "identify_ctrlr": false 00:22:20.468 } 00:22:20.468 } 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "method": "nvmf_set_max_subsystems", 00:22:20.468 "params": { 00:22:20.468 "max_subsystems": 1024 00:22:20.468 } 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "method": "nvmf_set_crdt", 00:22:20.468 "params": { 00:22:20.468 "crdt1": 0, 00:22:20.468 "crdt2": 0, 00:22:20.468 "crdt3": 0 00:22:20.468 } 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "method": "nvmf_create_transport", 00:22:20.468 "params": { 00:22:20.468 "trtype": "TCP", 00:22:20.468 "max_queue_depth": 128, 00:22:20.468 "max_io_qpairs_per_ctrlr": 127, 00:22:20.468 "in_capsule_data_size": 4096, 00:22:20.468 "max_io_size": 131072, 00:22:20.468 "io_unit_size": 131072, 00:22:20.468 "max_aq_depth": 128, 00:22:20.468 "num_shared_buffers": 511, 00:22:20.468 "buf_cache_size": 4294967295, 00:22:20.468 "dif_insert_or_strip": false, 00:22:20.468 "zcopy": false, 00:22:20.468 "c2h_success": false, 00:22:20.468 "sock_priority": 0, 00:22:20.468 "abort_timeout_sec": 1, 00:22:20.468 "ack_timeout": 0, 00:22:20.468 "data_wr_pool_size": 0 00:22:20.468 } 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "method": "nvmf_create_subsystem", 00:22:20.468 "params": { 00:22:20.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.468 "allow_any_host": false, 00:22:20.468 "serial_number": "SPDK00000000000001", 00:22:20.468 "model_number": "SPDK bdev Controller", 00:22:20.468 "max_namespaces": 10, 00:22:20.468 "min_cntlid": 1, 00:22:20.468 "max_cntlid": 65519, 00:22:20.468 "ana_reporting": false 00:22:20.468 } 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "method": "nvmf_subsystem_add_host", 00:22:20.468 "params": { 00:22:20.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.468 "host": "nqn.2016-06.io.spdk:host1", 00:22:20.468 "psk": "/tmp/tmp.IMYo7rCYwo" 00:22:20.468 } 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "method": "nvmf_subsystem_add_ns", 00:22:20.468 "params": { 00:22:20.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.468 "namespace": { 00:22:20.468 "nsid": 1, 00:22:20.468 "bdev_name": "malloc0", 00:22:20.468 "nguid": "916A68814FF44B58AB4784FFE5818020", 00:22:20.468 "uuid": "916a6881-4ff4-4b58-ab47-84ffe5818020", 00:22:20.468 "no_auto_visible": false 00:22:20.468 } 00:22:20.468 } 00:22:20.468 }, 00:22:20.468 { 00:22:20.468 "method": "nvmf_subsystem_add_listener", 00:22:20.468 "params": { 00:22:20.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.468 "listen_address": { 00:22:20.468 "trtype": "TCP", 00:22:20.468 "adrfam": "IPv4", 00:22:20.468 "traddr": "10.0.0.2", 00:22:20.468 
"trsvcid": "4420" 00:22:20.468 }, 00:22:20.468 "secure_channel": true 00:22:20.468 } 00:22:20.468 } 00:22:20.468 ] 00:22:20.468 } 00:22:20.468 ] 00:22:20.468 }' 00:22:20.468 19:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:20.727 "subsystems": [ 00:22:20.727 { 00:22:20.727 "subsystem": "keyring", 00:22:20.727 "config": [] 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "subsystem": "iobuf", 00:22:20.727 "config": [ 00:22:20.727 { 00:22:20.727 "method": "iobuf_set_options", 00:22:20.727 "params": { 00:22:20.727 "small_pool_count": 8192, 00:22:20.727 "large_pool_count": 1024, 00:22:20.727 "small_bufsize": 8192, 00:22:20.727 "large_bufsize": 135168 00:22:20.727 } 00:22:20.727 } 00:22:20.727 ] 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "subsystem": "sock", 00:22:20.727 "config": [ 00:22:20.727 { 00:22:20.727 "method": "sock_set_default_impl", 00:22:20.727 "params": { 00:22:20.727 "impl_name": "posix" 00:22:20.727 } 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "method": "sock_impl_set_options", 00:22:20.727 "params": { 00:22:20.727 "impl_name": "ssl", 00:22:20.727 "recv_buf_size": 4096, 00:22:20.727 "send_buf_size": 4096, 00:22:20.727 "enable_recv_pipe": true, 00:22:20.727 "enable_quickack": false, 00:22:20.727 "enable_placement_id": 0, 00:22:20.727 "enable_zerocopy_send_server": true, 00:22:20.727 "enable_zerocopy_send_client": false, 00:22:20.727 "zerocopy_threshold": 0, 00:22:20.727 "tls_version": 0, 00:22:20.727 "enable_ktls": false 00:22:20.727 } 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "method": "sock_impl_set_options", 00:22:20.727 "params": { 00:22:20.727 "impl_name": "posix", 00:22:20.727 "recv_buf_size": 2097152, 00:22:20.727 "send_buf_size": 2097152, 00:22:20.727 "enable_recv_pipe": true, 00:22:20.727 "enable_quickack": false, 00:22:20.727 "enable_placement_id": 0, 00:22:20.727 "enable_zerocopy_send_server": true, 00:22:20.727 "enable_zerocopy_send_client": false, 00:22:20.727 "zerocopy_threshold": 0, 00:22:20.727 "tls_version": 0, 00:22:20.727 "enable_ktls": false 00:22:20.727 } 00:22:20.727 } 00:22:20.727 ] 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "subsystem": "vmd", 00:22:20.727 "config": [] 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "subsystem": "accel", 00:22:20.727 "config": [ 00:22:20.727 { 00:22:20.727 "method": "accel_set_options", 00:22:20.727 "params": { 00:22:20.727 "small_cache_size": 128, 00:22:20.727 "large_cache_size": 16, 00:22:20.727 "task_count": 2048, 00:22:20.727 "sequence_count": 2048, 00:22:20.727 "buf_count": 2048 00:22:20.727 } 00:22:20.727 } 00:22:20.727 ] 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "subsystem": "bdev", 00:22:20.727 "config": [ 00:22:20.727 { 00:22:20.727 "method": "bdev_set_options", 00:22:20.727 "params": { 00:22:20.727 "bdev_io_pool_size": 65535, 00:22:20.727 "bdev_io_cache_size": 256, 00:22:20.727 "bdev_auto_examine": true, 00:22:20.727 "iobuf_small_cache_size": 128, 00:22:20.727 "iobuf_large_cache_size": 16 00:22:20.727 } 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "method": "bdev_raid_set_options", 00:22:20.727 "params": { 00:22:20.727 "process_window_size_kb": 1024 00:22:20.727 } 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "method": "bdev_iscsi_set_options", 00:22:20.727 "params": { 00:22:20.727 "timeout_sec": 30 00:22:20.727 } 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "method": "bdev_nvme_set_options", 00:22:20.727 "params": { 
00:22:20.727 "action_on_timeout": "none", 00:22:20.727 "timeout_us": 0, 00:22:20.727 "timeout_admin_us": 0, 00:22:20.727 "keep_alive_timeout_ms": 10000, 00:22:20.727 "arbitration_burst": 0, 00:22:20.727 "low_priority_weight": 0, 00:22:20.727 "medium_priority_weight": 0, 00:22:20.727 "high_priority_weight": 0, 00:22:20.727 "nvme_adminq_poll_period_us": 10000, 00:22:20.727 "nvme_ioq_poll_period_us": 0, 00:22:20.727 "io_queue_requests": 512, 00:22:20.727 "delay_cmd_submit": true, 00:22:20.727 "transport_retry_count": 4, 00:22:20.727 "bdev_retry_count": 3, 00:22:20.727 "transport_ack_timeout": 0, 00:22:20.727 "ctrlr_loss_timeout_sec": 0, 00:22:20.727 "reconnect_delay_sec": 0, 00:22:20.727 "fast_io_fail_timeout_sec": 0, 00:22:20.727 "disable_auto_failback": false, 00:22:20.727 "generate_uuids": false, 00:22:20.727 "transport_tos": 0, 00:22:20.727 "nvme_error_stat": false, 00:22:20.727 "rdma_srq_size": 0, 00:22:20.727 "io_path_stat": false, 00:22:20.727 "allow_accel_sequence": false, 00:22:20.727 "rdma_max_cq_size": 0, 00:22:20.727 "rdma_cm_event_timeout_ms": 0, 00:22:20.727 "dhchap_digests": [ 00:22:20.727 "sha256", 00:22:20.727 "sha384", 00:22:20.727 "sha512" 00:22:20.727 ], 00:22:20.727 "dhchap_dhgroups": [ 00:22:20.727 "null", 00:22:20.727 "ffdhe2048", 00:22:20.727 "ffdhe3072", 00:22:20.727 "ffdhe4096", 00:22:20.727 "ffdhe6144", 00:22:20.727 "ffdhe8192" 00:22:20.727 ] 00:22:20.727 } 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "method": "bdev_nvme_attach_controller", 00:22:20.727 "params": { 00:22:20.727 "name": "TLSTEST", 00:22:20.727 "trtype": "TCP", 00:22:20.727 "adrfam": "IPv4", 00:22:20.727 "traddr": "10.0.0.2", 00:22:20.727 "trsvcid": "4420", 00:22:20.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.727 "prchk_reftag": false, 00:22:20.727 "prchk_guard": false, 00:22:20.727 "ctrlr_loss_timeout_sec": 0, 00:22:20.727 "reconnect_delay_sec": 0, 00:22:20.727 "fast_io_fail_timeout_sec": 0, 00:22:20.727 "psk": "/tmp/tmp.IMYo7rCYwo", 00:22:20.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.727 "hdgst": false, 00:22:20.727 "ddgst": false 00:22:20.727 } 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "method": "bdev_nvme_set_hotplug", 00:22:20.727 "params": { 00:22:20.727 "period_us": 100000, 00:22:20.727 "enable": false 00:22:20.727 } 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "method": "bdev_wait_for_examine" 00:22:20.727 } 00:22:20.727 ] 00:22:20.727 }, 00:22:20.727 { 00:22:20.727 "subsystem": "nbd", 00:22:20.727 "config": [] 00:22:20.727 } 00:22:20.727 ] 00:22:20.727 }' 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1671142 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1671142 ']' 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1671142 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1671142 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1671142' 00:22:20.727 killing process with pid 1671142 00:22:20.727 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1671142 
00:22:20.727 Received shutdown signal, test time was about 10.000000 seconds 00:22:20.727 00:22:20.728 Latency(us) 00:22:20.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.728 =================================================================================================================== 00:22:20.728 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:20.728 [2024-07-15 19:28:31.432986] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:20.728 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1671142 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1670891 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1670891 ']' 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1670891 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1670891 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1670891' 00:22:20.987 killing process with pid 1670891 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1670891 00:22:20.987 [2024-07-15 19:28:31.652952] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1670891 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.987 19:28:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:20.987 "subsystems": [ 00:22:20.987 { 00:22:20.987 "subsystem": "keyring", 00:22:20.987 "config": [] 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "subsystem": "iobuf", 00:22:20.987 "config": [ 00:22:20.987 { 00:22:20.987 "method": "iobuf_set_options", 00:22:20.987 "params": { 00:22:20.987 "small_pool_count": 8192, 00:22:20.987 "large_pool_count": 1024, 00:22:20.987 "small_bufsize": 8192, 00:22:20.987 "large_bufsize": 135168 00:22:20.987 } 00:22:20.987 } 00:22:20.987 ] 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "subsystem": "sock", 00:22:20.987 "config": [ 00:22:20.987 { 00:22:20.987 "method": "sock_set_default_impl", 00:22:20.987 "params": { 00:22:20.987 "impl_name": "posix" 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "sock_impl_set_options", 00:22:20.987 "params": { 00:22:20.987 "impl_name": "ssl", 00:22:20.987 "recv_buf_size": 4096, 00:22:20.987 "send_buf_size": 4096, 00:22:20.987 "enable_recv_pipe": true, 00:22:20.987 "enable_quickack": false, 00:22:20.987 "enable_placement_id": 0, 00:22:20.987 "enable_zerocopy_send_server": true, 00:22:20.987 "enable_zerocopy_send_client": false, 
00:22:20.987 "zerocopy_threshold": 0, 00:22:20.987 "tls_version": 0, 00:22:20.987 "enable_ktls": false 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "sock_impl_set_options", 00:22:20.987 "params": { 00:22:20.987 "impl_name": "posix", 00:22:20.987 "recv_buf_size": 2097152, 00:22:20.987 "send_buf_size": 2097152, 00:22:20.987 "enable_recv_pipe": true, 00:22:20.987 "enable_quickack": false, 00:22:20.987 "enable_placement_id": 0, 00:22:20.987 "enable_zerocopy_send_server": true, 00:22:20.987 "enable_zerocopy_send_client": false, 00:22:20.987 "zerocopy_threshold": 0, 00:22:20.987 "tls_version": 0, 00:22:20.987 "enable_ktls": false 00:22:20.987 } 00:22:20.987 } 00:22:20.987 ] 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "subsystem": "vmd", 00:22:20.987 "config": [] 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "subsystem": "accel", 00:22:20.987 "config": [ 00:22:20.987 { 00:22:20.987 "method": "accel_set_options", 00:22:20.987 "params": { 00:22:20.987 "small_cache_size": 128, 00:22:20.987 "large_cache_size": 16, 00:22:20.987 "task_count": 2048, 00:22:20.987 "sequence_count": 2048, 00:22:20.987 "buf_count": 2048 00:22:20.987 } 00:22:20.987 } 00:22:20.987 ] 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "subsystem": "bdev", 00:22:20.987 "config": [ 00:22:20.987 { 00:22:20.987 "method": "bdev_set_options", 00:22:20.987 "params": { 00:22:20.987 "bdev_io_pool_size": 65535, 00:22:20.987 "bdev_io_cache_size": 256, 00:22:20.987 "bdev_auto_examine": true, 00:22:20.987 "iobuf_small_cache_size": 128, 00:22:20.987 "iobuf_large_cache_size": 16 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "bdev_raid_set_options", 00:22:20.987 "params": { 00:22:20.987 "process_window_size_kb": 1024 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "bdev_iscsi_set_options", 00:22:20.987 "params": { 00:22:20.987 "timeout_sec": 30 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "bdev_nvme_set_options", 00:22:20.987 "params": { 00:22:20.987 "action_on_timeout": "none", 00:22:20.987 "timeout_us": 0, 00:22:20.987 "timeout_admin_us": 0, 00:22:20.987 "keep_alive_timeout_ms": 10000, 00:22:20.987 "arbitration_burst": 0, 00:22:20.987 "low_priority_weight": 0, 00:22:20.987 "medium_priority_weight": 0, 00:22:20.987 "high_priority_weight": 0, 00:22:20.987 "nvme_adminq_poll_period_us": 10000, 00:22:20.987 "nvme_ioq_poll_period_us": 0, 00:22:20.987 "io_queue_requests": 0, 00:22:20.987 "delay_cmd_submit": true, 00:22:20.987 "transport_retry_count": 4, 00:22:20.987 "bdev_retry_count": 3, 00:22:20.987 "transport_ack_timeout": 0, 00:22:20.987 "ctrlr_loss_timeout_sec": 0, 00:22:20.987 "reconnect_delay_sec": 0, 00:22:20.987 "fast_io_fail_timeout_sec": 0, 00:22:20.987 "disable_auto_failback": false, 00:22:20.987 "generate_uuids": false, 00:22:20.987 "transport_tos": 0, 00:22:20.987 "nvme_error_stat": false, 00:22:20.987 "rdma_srq_size": 0, 00:22:20.987 "io_path_stat": false, 00:22:20.987 "allow_accel_sequence": false, 00:22:20.987 "rdma_max_cq_size": 0, 00:22:20.987 "rdma_cm_event_timeout_ms": 0, 00:22:20.987 "dhchap_digests": [ 00:22:20.987 "sha256", 00:22:20.987 "sha384", 00:22:20.987 "sha512" 00:22:20.987 ], 00:22:20.987 "dhchap_dhgroups": [ 00:22:20.987 "null", 00:22:20.987 "ffdhe2048", 00:22:20.987 "ffdhe3072", 00:22:20.987 "ffdhe4096", 00:22:20.987 "ffdhe6144", 00:22:20.987 "ffdhe8192" 00:22:20.987 ] 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "bdev_nvme_set_hotplug", 00:22:20.987 "params": { 00:22:20.987 "period_us": 100000, 00:22:20.987 
"enable": false 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "bdev_malloc_create", 00:22:20.987 "params": { 00:22:20.987 "name": "malloc0", 00:22:20.987 "num_blocks": 8192, 00:22:20.987 "block_size": 4096, 00:22:20.987 "physical_block_size": 4096, 00:22:20.987 "uuid": "916a6881-4ff4-4b58-ab47-84ffe5818020", 00:22:20.987 "optimal_io_boundary": 0 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "bdev_wait_for_examine" 00:22:20.987 } 00:22:20.987 ] 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "subsystem": "nbd", 00:22:20.987 "config": [] 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "subsystem": "scheduler", 00:22:20.987 "config": [ 00:22:20.987 { 00:22:20.987 "method": "framework_set_scheduler", 00:22:20.987 "params": { 00:22:20.987 "name": "static" 00:22:20.987 } 00:22:20.987 } 00:22:20.987 ] 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "subsystem": "nvmf", 00:22:20.987 "config": [ 00:22:20.987 { 00:22:20.987 "method": "nvmf_set_config", 00:22:20.987 "params": { 00:22:20.987 "discovery_filter": "match_any", 00:22:20.987 "admin_cmd_passthru": { 00:22:20.987 "identify_ctrlr": false 00:22:20.987 } 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "nvmf_set_max_subsystems", 00:22:20.987 "params": { 00:22:20.987 "max_subsystems": 1024 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "nvmf_set_crdt", 00:22:20.987 "params": { 00:22:20.987 "crdt1": 0, 00:22:20.987 "crdt2": 0, 00:22:20.987 "crdt3": 0 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "nvmf_create_transport", 00:22:20.987 "params": { 00:22:20.987 "trtype": "TCP", 00:22:20.987 "max_queue_depth": 128, 00:22:20.987 "max_io_qpairs_per_ctrlr": 127, 00:22:20.987 "in_capsule_data_size": 4096, 00:22:20.987 "max_io_size": 131072, 00:22:20.987 "io_unit_size": 131072, 00:22:20.987 "max_aq_depth": 128, 00:22:20.987 "num_shared_buffers": 511, 00:22:20.987 "buf_cache_size": 4294967295, 00:22:20.987 "dif_insert_or_strip": false, 00:22:20.987 "zcopy": false, 00:22:20.987 "c2h_success": false, 00:22:20.987 "sock_priority": 0, 00:22:20.987 "abort_timeout_sec": 1, 00:22:20.987 "ack_timeout": 0, 00:22:20.987 "data_wr_pool_size": 0 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "nvmf_create_subsystem", 00:22:20.987 "params": { 00:22:20.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.987 "allow_any_host": false, 00:22:20.987 "serial_number": "SPDK00000000000001", 00:22:20.987 "model_number": "SPDK bdev Controller", 00:22:20.987 "max_namespaces": 10, 00:22:20.987 "min_cntlid": 1, 00:22:20.987 "max_cntlid": 65519, 00:22:20.987 "ana_reporting": false 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "nvmf_subsystem_add_host", 00:22:20.987 "params": { 00:22:20.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.987 "host": "nqn.2016-06.io.spdk:host1", 00:22:20.987 "psk": "/tmp/tmp.IMYo7rCYwo" 00:22:20.987 } 00:22:20.987 }, 00:22:20.987 { 00:22:20.987 "method": "nvmf_subsystem_add_ns", 00:22:20.987 "params": { 00:22:20.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.987 "namespace": { 00:22:20.987 "nsid": 1, 00:22:20.987 "bdev_name": "malloc0", 00:22:20.987 "nguid": "916A68814FF44B58AB4784FFE5818020", 00:22:20.987 "uuid": "916a6881-4ff4-4b58-ab47-84ffe5818020", 00:22:20.987 "no_auto_visible": false 00:22:20.987 } 00:22:20.987 } 00:22:20.987 }, 00:22:20.988 { 00:22:20.988 "method": "nvmf_subsystem_add_listener", 00:22:20.988 "params": { 00:22:20.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.988 "listen_address": { 
00:22:20.988 "trtype": "TCP", 00:22:20.988 "adrfam": "IPv4", 00:22:20.988 "traddr": "10.0.0.2", 00:22:20.988 "trsvcid": "4420" 00:22:20.988 }, 00:22:20.988 "secure_channel": true 00:22:20.988 } 00:22:20.988 } 00:22:20.988 ] 00:22:20.988 } 00:22:20.988 ] 00:22:20.988 }' 00:22:21.245 19:28:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1671379 00:22:21.245 19:28:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1671379 00:22:21.245 19:28:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:21.245 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1671379 ']' 00:22:21.245 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.245 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.245 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.245 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.245 19:28:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.245 [2024-07-15 19:28:31.891969] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:21.245 [2024-07-15 19:28:31.892015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.245 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.245 [2024-07-15 19:28:31.921132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:21.245 [2024-07-15 19:28:31.948351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.245 [2024-07-15 19:28:31.988260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.245 [2024-07-15 19:28:31.988299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.245 [2024-07-15 19:28:31.988306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.245 [2024-07-15 19:28:31.988312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.245 [2024-07-15 19:28:31.988317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:21.245 [2024-07-15 19:28:31.988373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.503 [2024-07-15 19:28:32.184933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.503 [2024-07-15 19:28:32.200902] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:21.503 [2024-07-15 19:28:32.216963] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:21.503 [2024-07-15 19:28:32.224457] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1671416 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1671416 /var/tmp/bdevperf.sock 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1671416 ']' 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:22.128 "subsystems": [ 00:22:22.128 { 00:22:22.128 "subsystem": "keyring", 00:22:22.128 "config": [] 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "subsystem": "iobuf", 00:22:22.128 "config": [ 00:22:22.128 { 00:22:22.128 "method": "iobuf_set_options", 00:22:22.128 "params": { 00:22:22.128 "small_pool_count": 8192, 00:22:22.128 "large_pool_count": 1024, 00:22:22.128 "small_bufsize": 8192, 00:22:22.128 "large_bufsize": 135168 00:22:22.128 } 00:22:22.128 } 00:22:22.128 ] 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "subsystem": "sock", 00:22:22.128 "config": [ 00:22:22.128 { 00:22:22.128 "method": "sock_set_default_impl", 00:22:22.128 "params": { 00:22:22.128 "impl_name": "posix" 00:22:22.128 } 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "method": "sock_impl_set_options", 00:22:22.128 "params": { 00:22:22.128 "impl_name": "ssl", 00:22:22.128 "recv_buf_size": 4096, 00:22:22.128 "send_buf_size": 4096, 00:22:22.128 "enable_recv_pipe": true, 00:22:22.128 "enable_quickack": false, 00:22:22.128 "enable_placement_id": 0, 00:22:22.128 "enable_zerocopy_send_server": true, 00:22:22.128 "enable_zerocopy_send_client": false, 00:22:22.128 "zerocopy_threshold": 0, 00:22:22.128 "tls_version": 0, 00:22:22.128 "enable_ktls": false 00:22:22.128 } 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "method": "sock_impl_set_options", 00:22:22.128 "params": { 00:22:22.128 "impl_name": "posix", 00:22:22.128 "recv_buf_size": 2097152, 00:22:22.128 "send_buf_size": 2097152, 00:22:22.128 "enable_recv_pipe": true, 00:22:22.128 "enable_quickack": false, 00:22:22.128 "enable_placement_id": 0, 00:22:22.128 "enable_zerocopy_send_server": true, 00:22:22.128 "enable_zerocopy_send_client": false, 00:22:22.128 "zerocopy_threshold": 0, 00:22:22.128 "tls_version": 0, 00:22:22.128 "enable_ktls": false 00:22:22.128 } 00:22:22.128 } 00:22:22.128 ] 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "subsystem": "vmd", 00:22:22.128 "config": [] 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "subsystem": "accel", 00:22:22.128 "config": [ 00:22:22.128 { 00:22:22.128 "method": "accel_set_options", 00:22:22.128 "params": { 00:22:22.128 "small_cache_size": 128, 00:22:22.128 "large_cache_size": 16, 00:22:22.128 "task_count": 2048, 00:22:22.128 "sequence_count": 2048, 00:22:22.128 "buf_count": 2048 00:22:22.128 } 00:22:22.128 } 00:22:22.128 ] 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "subsystem": "bdev", 00:22:22.128 "config": [ 00:22:22.128 { 00:22:22.128 "method": "bdev_set_options", 00:22:22.128 "params": { 00:22:22.128 "bdev_io_pool_size": 65535, 00:22:22.128 "bdev_io_cache_size": 256, 00:22:22.128 "bdev_auto_examine": true, 00:22:22.128 "iobuf_small_cache_size": 128, 00:22:22.128 "iobuf_large_cache_size": 16 00:22:22.128 } 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "method": "bdev_raid_set_options", 00:22:22.128 "params": { 00:22:22.128 "process_window_size_kb": 1024 00:22:22.128 } 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "method": "bdev_iscsi_set_options", 00:22:22.128 "params": { 00:22:22.128 "timeout_sec": 30 00:22:22.128 } 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "method": "bdev_nvme_set_options", 00:22:22.128 "params": { 00:22:22.128 "action_on_timeout": "none", 00:22:22.128 "timeout_us": 0, 00:22:22.128 "timeout_admin_us": 0, 00:22:22.128 "keep_alive_timeout_ms": 10000, 00:22:22.128 "arbitration_burst": 0, 00:22:22.128 "low_priority_weight": 0, 00:22:22.128 "medium_priority_weight": 0, 00:22:22.128 "high_priority_weight": 0, 00:22:22.128 
"nvme_adminq_poll_period_us": 10000, 00:22:22.128 "nvme_ioq_poll_period_us": 0, 00:22:22.128 "io_queue_requests": 512, 00:22:22.128 "delay_cmd_submit": true, 00:22:22.128 "transport_retry_count": 4, 00:22:22.128 "bdev_retry_count": 3, 00:22:22.128 "transport_ack_timeout": 0, 00:22:22.128 "ctrlr_loss_timeout_sec": 0, 00:22:22.128 "reconnect_delay_sec": 0, 00:22:22.128 "fast_io_fail_timeout_sec": 0, 00:22:22.128 "disable_auto_failback": false, 00:22:22.128 "generate_uuids": false, 00:22:22.128 "transport_tos": 0, 00:22:22.128 "nvme_error_stat": false, 00:22:22.128 "rdma_srq_size": 0, 00:22:22.128 "io_path_stat": false, 00:22:22.128 "allow_accel_sequence": false, 00:22:22.128 "rdma_max_cq_size": 0, 00:22:22.128 "rdma_cm_event_timeout_ms": 0, 00:22:22.128 "dhchap_digests": [ 00:22:22.128 "sha256", 00:22:22.128 "sha384", 00:22:22.128 "sha512" 00:22:22.128 ], 00:22:22.128 "dhchap_dhgroups": [ 00:22:22.128 "null", 00:22:22.128 "ffdhe2048", 00:22:22.128 "ffdhe3072", 00:22:22.128 "ffdhe4096", 00:22:22.128 "ffdhe6144", 00:22:22.128 "ffdhe8192" 00:22:22.128 ] 00:22:22.128 } 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "method": "bdev_nvme_attach_controller", 00:22:22.128 "params": { 00:22:22.128 "name": "TLSTEST", 00:22:22.128 "trtype": "TCP", 00:22:22.128 "adrfam": "IPv4", 00:22:22.128 "traddr": "10.0.0.2", 00:22:22.128 "trsvcid": "4420", 00:22:22.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.128 "prchk_reftag": false, 00:22:22.128 "prchk_guard": false, 00:22:22.128 "ctrlr_loss_timeout_sec": 0, 00:22:22.128 "reconnect_delay_sec": 0, 00:22:22.128 "fast_io_fail_timeout_sec": 0, 00:22:22.128 "psk": "/tmp/tmp.IMYo7rCYwo", 00:22:22.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.128 "hdgst": false, 00:22:22.128 "ddgst": false 00:22:22.128 } 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "method": "bdev_nvme_set_hotplug", 00:22:22.128 "params": { 00:22:22.128 "period_us": 100000, 00:22:22.128 "enable": false 00:22:22.128 } 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "method": "bdev_wait_for_examine" 00:22:22.128 } 00:22:22.128 ] 00:22:22.128 }, 00:22:22.128 { 00:22:22.128 "subsystem": "nbd", 00:22:22.128 "config": [] 00:22:22.128 } 00:22:22.128 ] 00:22:22.128 }' 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.128 19:28:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.128 [2024-07-15 19:28:32.770707] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:22.128 [2024-07-15 19:28:32.770756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1671416 ] 00:22:22.128 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.128 [2024-07-15 19:28:32.797407] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:22.128 [2024-07-15 19:28:32.821245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.128 [2024-07-15 19:28:32.862377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.386 [2024-07-15 19:28:32.999640] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.386 [2024-07-15 19:28:32.999713] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:22.959 19:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.959 19:28:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:22.959 19:28:33 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:22.959 Running I/O for 10 seconds... 00:22:32.934 00:22:32.934 Latency(us) 00:22:32.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.934 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:32.934 Verification LBA range: start 0x0 length 0x2000 00:22:32.934 TLSTESTn1 : 10.03 4995.93 19.52 0.00 0.00 25576.41 6867.03 54708.31 00:22:32.934 =================================================================================================================== 00:22:32.934 Total : 4995.93 19.52 0.00 0.00 25576.41 6867.03 54708.31 00:22:32.934 0 00:22:32.934 19:28:43 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.934 19:28:43 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1671416 00:22:32.934 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1671416 ']' 00:22:32.934 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1671416 00:22:32.934 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:32.934 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.934 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1671416 00:22:32.934 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:32.934 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:33.193 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1671416' 00:22:33.193 killing process with pid 1671416 00:22:33.193 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1671416 00:22:33.193 Received shutdown signal, test time was about 10.000000 seconds 00:22:33.193 00:22:33.193 Latency(us) 00:22:33.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.193 =================================================================================================================== 00:22:33.193 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.193 [2024-07-15 19:28:43.789833] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:33.193 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1671416 00:22:33.193 19:28:43 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1671379 00:22:33.193 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1671379 ']' 00:22:33.193 19:28:43 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1671379 00:22:33.193 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:33.193 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.193 19:28:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1671379 00:22:33.193 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:33.193 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:33.193 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1671379' 00:22:33.193 killing process with pid 1671379 00:22:33.193 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1671379 00:22:33.193 [2024-07-15 19:28:44.005171] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:33.193 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1671379 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1673260 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1673260 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1673260 ']' 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.452 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.452 [2024-07-15 19:28:44.239653] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:33.452 [2024-07-15 19:28:44.239702] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.452 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.452 [2024-07-15 19:28:44.270001] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:33.452 [2024-07-15 19:28:44.298441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.711 [2024-07-15 19:28:44.336832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:33.711 [2024-07-15 19:28:44.336871] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.711 [2024-07-15 19:28:44.336879] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.711 [2024-07-15 19:28:44.336885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.711 [2024-07-15 19:28:44.336889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.711 [2024-07-15 19:28:44.336912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.711 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.711 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:33.711 19:28:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.711 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.711 19:28:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.711 19:28:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.711 19:28:44 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.IMYo7rCYwo 00:22:33.711 19:28:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IMYo7rCYwo 00:22:33.711 19:28:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:33.970 [2024-07-15 19:28:44.617615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.970 19:28:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:33.970 19:28:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:34.229 [2024-07-15 19:28:44.958486] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:34.229 [2024-07-15 19:28:44.958696] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.229 19:28:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:34.489 malloc0 00:22:34.489 19:28:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:34.489 19:28:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IMYo7rCYwo 00:22:34.749 [2024-07-15 19:28:45.459857] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:34.749 19:28:45 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1673513 00:22:34.749 19:28:45 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:34.749 19:28:45 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1673513 /var/tmp/bdevperf.sock 00:22:34.749 19:28:45 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@829 -- # '[' -z 1673513 ']' 00:22:34.749 19:28:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.749 19:28:45 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:34.749 19:28:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.749 19:28:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.749 19:28:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.749 19:28:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.749 [2024-07-15 19:28:45.520669] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:34.749 [2024-07-15 19:28:45.520715] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1673513 ] 00:22:34.749 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.749 [2024-07-15 19:28:45.545668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:34.749 [2024-07-15 19:28:45.573705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.009 [2024-07-15 19:28:45.614575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.009 19:28:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.009 19:28:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:35.009 19:28:45 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IMYo7rCYwo 00:22:35.317 19:28:45 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:35.317 [2024-07-15 19:28:46.031900] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.317 nvme0n1 00:22:35.317 19:28:46 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:35.585 Running I/O for 1 seconds... 
00:22:36.522 00:22:36.522 Latency(us) 00:22:36.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.522 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:36.522 Verification LBA range: start 0x0 length 0x2000 00:22:36.522 nvme0n1 : 1.01 5103.61 19.94 0.00 0.00 24881.53 4729.99 43082.80 00:22:36.522 =================================================================================================================== 00:22:36.522 Total : 5103.61 19.94 0.00 0.00 24881.53 4729.99 43082.80 00:22:36.522 0 00:22:36.522 19:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1673513 00:22:36.522 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1673513 ']' 00:22:36.522 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1673513 00:22:36.522 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:36.522 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.523 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1673513 00:22:36.523 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:36.523 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:36.523 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1673513' 00:22:36.523 killing process with pid 1673513 00:22:36.523 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1673513 00:22:36.523 Received shutdown signal, test time was about 1.000000 seconds 00:22:36.523 00:22:36.523 Latency(us) 00:22:36.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.523 =================================================================================================================== 00:22:36.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.523 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1673513 00:22:36.781 19:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1673260 00:22:36.782 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1673260 ']' 00:22:36.782 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1673260 00:22:36.782 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:36.782 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.782 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1673260 00:22:36.782 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:36.782 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:36.782 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1673260' 00:22:36.782 killing process with pid 1673260 00:22:36.782 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1673260 00:22:36.782 [2024-07-15 19:28:47.487737] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:36.782 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1673260 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.041 
19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1673977 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1673977 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1673977 ']' 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.041 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.041 [2024-07-15 19:28:47.724532] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:37.041 [2024-07-15 19:28:47.724577] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.041 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.041 [2024-07-15 19:28:47.754124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:37.041 [2024-07-15 19:28:47.781143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.041 [2024-07-15 19:28:47.817143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.041 [2024-07-15 19:28:47.817182] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.041 [2024-07-15 19:28:47.817190] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.041 [2024-07-15 19:28:47.817196] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.041 [2024-07-15 19:28:47.817204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:37.041 [2024-07-15 19:28:47.817222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.300 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.300 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:37.300 19:28:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.300 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:37.300 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.300 19:28:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.300 19:28:47 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:37.300 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.300 19:28:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.300 [2024-07-15 19:28:47.945491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.300 malloc0 00:22:37.300 [2024-07-15 19:28:47.973793] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:37.300 [2024-07-15 19:28:47.973984] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.300 19:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.300 19:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1674000 00:22:37.300 19:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1674000 /var/tmp/bdevperf.sock 00:22:37.300 19:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:37.300 19:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1674000 ']' 00:22:37.300 19:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.300 19:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.300 19:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.300 19:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.300 19:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.300 [2024-07-15 19:28:48.045983] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:37.300 [2024-07-15 19:28:48.046024] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1674000 ] 00:22:37.300 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.301 [2024-07-15 19:28:48.072337] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:37.301 [2024-07-15 19:28:48.099720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.301 [2024-07-15 19:28:48.139479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.559 19:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.559 19:28:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:37.559 19:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IMYo7rCYwo 00:22:37.559 19:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:37.817 [2024-07-15 19:28:48.567965] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.817 nvme0n1 00:22:37.817 19:28:48 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:38.075 Running I/O for 1 seconds... 00:22:39.012 00:22:39.012 Latency(us) 00:22:39.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.012 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:39.012 Verification LBA range: start 0x0 length 0x2000 00:22:39.012 nvme0n1 : 1.01 5328.65 20.82 0.00 0.00 23824.37 6012.22 35104.50 00:22:39.012 =================================================================================================================== 00:22:39.012 Total : 5328.65 20.82 0.00 0.00 23824.37 6012.22 35104.50 00:22:39.012 0 00:22:39.012 19:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:39.012 19:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.012 19:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.271 19:28:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.271 19:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:39.271 "subsystems": [ 00:22:39.271 { 00:22:39.271 "subsystem": "keyring", 00:22:39.271 "config": [ 00:22:39.271 { 00:22:39.271 "method": "keyring_file_add_key", 00:22:39.271 "params": { 00:22:39.271 "name": "key0", 00:22:39.271 "path": "/tmp/tmp.IMYo7rCYwo" 00:22:39.271 } 00:22:39.271 } 00:22:39.271 ] 00:22:39.271 }, 00:22:39.271 { 00:22:39.271 "subsystem": "iobuf", 00:22:39.271 "config": [ 00:22:39.271 { 00:22:39.271 "method": "iobuf_set_options", 00:22:39.271 "params": { 00:22:39.271 "small_pool_count": 8192, 00:22:39.271 "large_pool_count": 1024, 00:22:39.271 "small_bufsize": 8192, 00:22:39.271 "large_bufsize": 135168 00:22:39.271 } 00:22:39.271 } 00:22:39.271 ] 00:22:39.271 }, 00:22:39.271 { 00:22:39.271 "subsystem": "sock", 00:22:39.271 "config": [ 00:22:39.271 { 00:22:39.271 "method": "sock_set_default_impl", 00:22:39.271 "params": { 00:22:39.271 "impl_name": "posix" 00:22:39.271 } 00:22:39.271 }, 00:22:39.271 { 00:22:39.271 "method": "sock_impl_set_options", 00:22:39.271 "params": { 00:22:39.271 "impl_name": "ssl", 00:22:39.271 "recv_buf_size": 4096, 00:22:39.271 "send_buf_size": 4096, 00:22:39.271 "enable_recv_pipe": true, 00:22:39.271 "enable_quickack": false, 00:22:39.271 "enable_placement_id": 0, 00:22:39.271 
"enable_zerocopy_send_server": true, 00:22:39.271 "enable_zerocopy_send_client": false, 00:22:39.271 "zerocopy_threshold": 0, 00:22:39.271 "tls_version": 0, 00:22:39.271 "enable_ktls": false 00:22:39.271 } 00:22:39.271 }, 00:22:39.271 { 00:22:39.271 "method": "sock_impl_set_options", 00:22:39.271 "params": { 00:22:39.271 "impl_name": "posix", 00:22:39.271 "recv_buf_size": 2097152, 00:22:39.271 "send_buf_size": 2097152, 00:22:39.271 "enable_recv_pipe": true, 00:22:39.272 "enable_quickack": false, 00:22:39.272 "enable_placement_id": 0, 00:22:39.272 "enable_zerocopy_send_server": true, 00:22:39.272 "enable_zerocopy_send_client": false, 00:22:39.272 "zerocopy_threshold": 0, 00:22:39.272 "tls_version": 0, 00:22:39.272 "enable_ktls": false 00:22:39.272 } 00:22:39.272 } 00:22:39.272 ] 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "subsystem": "vmd", 00:22:39.272 "config": [] 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "subsystem": "accel", 00:22:39.272 "config": [ 00:22:39.272 { 00:22:39.272 "method": "accel_set_options", 00:22:39.272 "params": { 00:22:39.272 "small_cache_size": 128, 00:22:39.272 "large_cache_size": 16, 00:22:39.272 "task_count": 2048, 00:22:39.272 "sequence_count": 2048, 00:22:39.272 "buf_count": 2048 00:22:39.272 } 00:22:39.272 } 00:22:39.272 ] 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "subsystem": "bdev", 00:22:39.272 "config": [ 00:22:39.272 { 00:22:39.272 "method": "bdev_set_options", 00:22:39.272 "params": { 00:22:39.272 "bdev_io_pool_size": 65535, 00:22:39.272 "bdev_io_cache_size": 256, 00:22:39.272 "bdev_auto_examine": true, 00:22:39.272 "iobuf_small_cache_size": 128, 00:22:39.272 "iobuf_large_cache_size": 16 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "bdev_raid_set_options", 00:22:39.272 "params": { 00:22:39.272 "process_window_size_kb": 1024 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "bdev_iscsi_set_options", 00:22:39.272 "params": { 00:22:39.272 "timeout_sec": 30 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "bdev_nvme_set_options", 00:22:39.272 "params": { 00:22:39.272 "action_on_timeout": "none", 00:22:39.272 "timeout_us": 0, 00:22:39.272 "timeout_admin_us": 0, 00:22:39.272 "keep_alive_timeout_ms": 10000, 00:22:39.272 "arbitration_burst": 0, 00:22:39.272 "low_priority_weight": 0, 00:22:39.272 "medium_priority_weight": 0, 00:22:39.272 "high_priority_weight": 0, 00:22:39.272 "nvme_adminq_poll_period_us": 10000, 00:22:39.272 "nvme_ioq_poll_period_us": 0, 00:22:39.272 "io_queue_requests": 0, 00:22:39.272 "delay_cmd_submit": true, 00:22:39.272 "transport_retry_count": 4, 00:22:39.272 "bdev_retry_count": 3, 00:22:39.272 "transport_ack_timeout": 0, 00:22:39.272 "ctrlr_loss_timeout_sec": 0, 00:22:39.272 "reconnect_delay_sec": 0, 00:22:39.272 "fast_io_fail_timeout_sec": 0, 00:22:39.272 "disable_auto_failback": false, 00:22:39.272 "generate_uuids": false, 00:22:39.272 "transport_tos": 0, 00:22:39.272 "nvme_error_stat": false, 00:22:39.272 "rdma_srq_size": 0, 00:22:39.272 "io_path_stat": false, 00:22:39.272 "allow_accel_sequence": false, 00:22:39.272 "rdma_max_cq_size": 0, 00:22:39.272 "rdma_cm_event_timeout_ms": 0, 00:22:39.272 "dhchap_digests": [ 00:22:39.272 "sha256", 00:22:39.272 "sha384", 00:22:39.272 "sha512" 00:22:39.272 ], 00:22:39.272 "dhchap_dhgroups": [ 00:22:39.272 "null", 00:22:39.272 "ffdhe2048", 00:22:39.272 "ffdhe3072", 00:22:39.272 "ffdhe4096", 00:22:39.272 "ffdhe6144", 00:22:39.272 "ffdhe8192" 00:22:39.272 ] 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": 
"bdev_nvme_set_hotplug", 00:22:39.272 "params": { 00:22:39.272 "period_us": 100000, 00:22:39.272 "enable": false 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "bdev_malloc_create", 00:22:39.272 "params": { 00:22:39.272 "name": "malloc0", 00:22:39.272 "num_blocks": 8192, 00:22:39.272 "block_size": 4096, 00:22:39.272 "physical_block_size": 4096, 00:22:39.272 "uuid": "11ef6452-f8dd-4a25-a34d-422f22ef8c58", 00:22:39.272 "optimal_io_boundary": 0 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "bdev_wait_for_examine" 00:22:39.272 } 00:22:39.272 ] 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "subsystem": "nbd", 00:22:39.272 "config": [] 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "subsystem": "scheduler", 00:22:39.272 "config": [ 00:22:39.272 { 00:22:39.272 "method": "framework_set_scheduler", 00:22:39.272 "params": { 00:22:39.272 "name": "static" 00:22:39.272 } 00:22:39.272 } 00:22:39.272 ] 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "subsystem": "nvmf", 00:22:39.272 "config": [ 00:22:39.272 { 00:22:39.272 "method": "nvmf_set_config", 00:22:39.272 "params": { 00:22:39.272 "discovery_filter": "match_any", 00:22:39.272 "admin_cmd_passthru": { 00:22:39.272 "identify_ctrlr": false 00:22:39.272 } 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "nvmf_set_max_subsystems", 00:22:39.272 "params": { 00:22:39.272 "max_subsystems": 1024 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "nvmf_set_crdt", 00:22:39.272 "params": { 00:22:39.272 "crdt1": 0, 00:22:39.272 "crdt2": 0, 00:22:39.272 "crdt3": 0 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "nvmf_create_transport", 00:22:39.272 "params": { 00:22:39.272 "trtype": "TCP", 00:22:39.272 "max_queue_depth": 128, 00:22:39.272 "max_io_qpairs_per_ctrlr": 127, 00:22:39.272 "in_capsule_data_size": 4096, 00:22:39.272 "max_io_size": 131072, 00:22:39.272 "io_unit_size": 131072, 00:22:39.272 "max_aq_depth": 128, 00:22:39.272 "num_shared_buffers": 511, 00:22:39.272 "buf_cache_size": 4294967295, 00:22:39.272 "dif_insert_or_strip": false, 00:22:39.272 "zcopy": false, 00:22:39.272 "c2h_success": false, 00:22:39.272 "sock_priority": 0, 00:22:39.272 "abort_timeout_sec": 1, 00:22:39.272 "ack_timeout": 0, 00:22:39.272 "data_wr_pool_size": 0 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "nvmf_create_subsystem", 00:22:39.272 "params": { 00:22:39.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.272 "allow_any_host": false, 00:22:39.272 "serial_number": "00000000000000000000", 00:22:39.272 "model_number": "SPDK bdev Controller", 00:22:39.272 "max_namespaces": 32, 00:22:39.272 "min_cntlid": 1, 00:22:39.272 "max_cntlid": 65519, 00:22:39.272 "ana_reporting": false 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "nvmf_subsystem_add_host", 00:22:39.272 "params": { 00:22:39.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.272 "host": "nqn.2016-06.io.spdk:host1", 00:22:39.272 "psk": "key0" 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "nvmf_subsystem_add_ns", 00:22:39.272 "params": { 00:22:39.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.272 "namespace": { 00:22:39.272 "nsid": 1, 00:22:39.272 "bdev_name": "malloc0", 00:22:39.272 "nguid": "11EF6452F8DD4A25A34D422F22EF8C58", 00:22:39.272 "uuid": "11ef6452-f8dd-4a25-a34d-422f22ef8c58", 00:22:39.272 "no_auto_visible": false 00:22:39.272 } 00:22:39.272 } 00:22:39.272 }, 00:22:39.272 { 00:22:39.272 "method": "nvmf_subsystem_add_listener", 00:22:39.272 "params": { 
00:22:39.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.272 "listen_address": { 00:22:39.272 "trtype": "TCP", 00:22:39.272 "adrfam": "IPv4", 00:22:39.272 "traddr": "10.0.0.2", 00:22:39.272 "trsvcid": "4420" 00:22:39.272 }, 00:22:39.272 "secure_channel": false, 00:22:39.272 "sock_impl": "ssl" 00:22:39.272 } 00:22:39.272 } 00:22:39.272 ] 00:22:39.272 } 00:22:39.272 ] 00:22:39.272 }' 00:22:39.272 19:28:49 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:39.532 19:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:39.532 "subsystems": [ 00:22:39.532 { 00:22:39.532 "subsystem": "keyring", 00:22:39.532 "config": [ 00:22:39.532 { 00:22:39.532 "method": "keyring_file_add_key", 00:22:39.532 "params": { 00:22:39.532 "name": "key0", 00:22:39.532 "path": "/tmp/tmp.IMYo7rCYwo" 00:22:39.532 } 00:22:39.532 } 00:22:39.532 ] 00:22:39.532 }, 00:22:39.532 { 00:22:39.532 "subsystem": "iobuf", 00:22:39.532 "config": [ 00:22:39.532 { 00:22:39.532 "method": "iobuf_set_options", 00:22:39.532 "params": { 00:22:39.532 "small_pool_count": 8192, 00:22:39.532 "large_pool_count": 1024, 00:22:39.532 "small_bufsize": 8192, 00:22:39.532 "large_bufsize": 135168 00:22:39.532 } 00:22:39.532 } 00:22:39.532 ] 00:22:39.532 }, 00:22:39.532 { 00:22:39.532 "subsystem": "sock", 00:22:39.532 "config": [ 00:22:39.532 { 00:22:39.532 "method": "sock_set_default_impl", 00:22:39.532 "params": { 00:22:39.532 "impl_name": "posix" 00:22:39.532 } 00:22:39.532 }, 00:22:39.532 { 00:22:39.532 "method": "sock_impl_set_options", 00:22:39.532 "params": { 00:22:39.532 "impl_name": "ssl", 00:22:39.532 "recv_buf_size": 4096, 00:22:39.532 "send_buf_size": 4096, 00:22:39.532 "enable_recv_pipe": true, 00:22:39.532 "enable_quickack": false, 00:22:39.532 "enable_placement_id": 0, 00:22:39.532 "enable_zerocopy_send_server": true, 00:22:39.532 "enable_zerocopy_send_client": false, 00:22:39.532 "zerocopy_threshold": 0, 00:22:39.532 "tls_version": 0, 00:22:39.532 "enable_ktls": false 00:22:39.532 } 00:22:39.532 }, 00:22:39.532 { 00:22:39.532 "method": "sock_impl_set_options", 00:22:39.532 "params": { 00:22:39.532 "impl_name": "posix", 00:22:39.532 "recv_buf_size": 2097152, 00:22:39.532 "send_buf_size": 2097152, 00:22:39.532 "enable_recv_pipe": true, 00:22:39.532 "enable_quickack": false, 00:22:39.532 "enable_placement_id": 0, 00:22:39.532 "enable_zerocopy_send_server": true, 00:22:39.532 "enable_zerocopy_send_client": false, 00:22:39.532 "zerocopy_threshold": 0, 00:22:39.532 "tls_version": 0, 00:22:39.532 "enable_ktls": false 00:22:39.532 } 00:22:39.532 } 00:22:39.532 ] 00:22:39.532 }, 00:22:39.532 { 00:22:39.532 "subsystem": "vmd", 00:22:39.532 "config": [] 00:22:39.532 }, 00:22:39.532 { 00:22:39.532 "subsystem": "accel", 00:22:39.532 "config": [ 00:22:39.532 { 00:22:39.532 "method": "accel_set_options", 00:22:39.532 "params": { 00:22:39.532 "small_cache_size": 128, 00:22:39.532 "large_cache_size": 16, 00:22:39.532 "task_count": 2048, 00:22:39.532 "sequence_count": 2048, 00:22:39.532 "buf_count": 2048 00:22:39.532 } 00:22:39.532 } 00:22:39.532 ] 00:22:39.532 }, 00:22:39.532 { 00:22:39.532 "subsystem": "bdev", 00:22:39.532 "config": [ 00:22:39.532 { 00:22:39.532 "method": "bdev_set_options", 00:22:39.532 "params": { 00:22:39.532 "bdev_io_pool_size": 65535, 00:22:39.532 "bdev_io_cache_size": 256, 00:22:39.532 "bdev_auto_examine": true, 00:22:39.532 "iobuf_small_cache_size": 128, 00:22:39.532 "iobuf_large_cache_size": 16 00:22:39.532 } 
00:22:39.532 }, 00:22:39.532 { 00:22:39.532 "method": "bdev_raid_set_options", 00:22:39.532 "params": { 00:22:39.532 "process_window_size_kb": 1024 00:22:39.532 } 00:22:39.532 }, 00:22:39.532 { 00:22:39.532 "method": "bdev_iscsi_set_options", 00:22:39.532 "params": { 00:22:39.532 "timeout_sec": 30 00:22:39.532 } 00:22:39.532 }, 00:22:39.532 { 00:22:39.532 "method": "bdev_nvme_set_options", 00:22:39.532 "params": { 00:22:39.532 "action_on_timeout": "none", 00:22:39.532 "timeout_us": 0, 00:22:39.532 "timeout_admin_us": 0, 00:22:39.532 "keep_alive_timeout_ms": 10000, 00:22:39.532 "arbitration_burst": 0, 00:22:39.532 "low_priority_weight": 0, 00:22:39.532 "medium_priority_weight": 0, 00:22:39.532 "high_priority_weight": 0, 00:22:39.532 "nvme_adminq_poll_period_us": 10000, 00:22:39.532 "nvme_ioq_poll_period_us": 0, 00:22:39.532 "io_queue_requests": 512, 00:22:39.532 "delay_cmd_submit": true, 00:22:39.532 "transport_retry_count": 4, 00:22:39.532 "bdev_retry_count": 3, 00:22:39.532 "transport_ack_timeout": 0, 00:22:39.532 "ctrlr_loss_timeout_sec": 0, 00:22:39.532 "reconnect_delay_sec": 0, 00:22:39.532 "fast_io_fail_timeout_sec": 0, 00:22:39.532 "disable_auto_failback": false, 00:22:39.532 "generate_uuids": false, 00:22:39.532 "transport_tos": 0, 00:22:39.532 "nvme_error_stat": false, 00:22:39.532 "rdma_srq_size": 0, 00:22:39.532 "io_path_stat": false, 00:22:39.532 "allow_accel_sequence": false, 00:22:39.532 "rdma_max_cq_size": 0, 00:22:39.532 "rdma_cm_event_timeout_ms": 0, 00:22:39.532 "dhchap_digests": [ 00:22:39.532 "sha256", 00:22:39.532 "sha384", 00:22:39.532 "sha512" 00:22:39.532 ], 00:22:39.532 "dhchap_dhgroups": [ 00:22:39.532 "null", 00:22:39.532 "ffdhe2048", 00:22:39.532 "ffdhe3072", 00:22:39.533 "ffdhe4096", 00:22:39.533 "ffdhe6144", 00:22:39.533 "ffdhe8192" 00:22:39.533 ] 00:22:39.533 } 00:22:39.533 }, 00:22:39.533 { 00:22:39.533 "method": "bdev_nvme_attach_controller", 00:22:39.533 "params": { 00:22:39.533 "name": "nvme0", 00:22:39.533 "trtype": "TCP", 00:22:39.533 "adrfam": "IPv4", 00:22:39.533 "traddr": "10.0.0.2", 00:22:39.533 "trsvcid": "4420", 00:22:39.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.533 "prchk_reftag": false, 00:22:39.533 "prchk_guard": false, 00:22:39.533 "ctrlr_loss_timeout_sec": 0, 00:22:39.533 "reconnect_delay_sec": 0, 00:22:39.533 "fast_io_fail_timeout_sec": 0, 00:22:39.533 "psk": "key0", 00:22:39.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.533 "hdgst": false, 00:22:39.533 "ddgst": false 00:22:39.533 } 00:22:39.533 }, 00:22:39.533 { 00:22:39.533 "method": "bdev_nvme_set_hotplug", 00:22:39.533 "params": { 00:22:39.533 "period_us": 100000, 00:22:39.533 "enable": false 00:22:39.533 } 00:22:39.533 }, 00:22:39.533 { 00:22:39.533 "method": "bdev_enable_histogram", 00:22:39.533 "params": { 00:22:39.533 "name": "nvme0n1", 00:22:39.533 "enable": true 00:22:39.533 } 00:22:39.533 }, 00:22:39.533 { 00:22:39.533 "method": "bdev_wait_for_examine" 00:22:39.533 } 00:22:39.533 ] 00:22:39.533 }, 00:22:39.533 { 00:22:39.533 "subsystem": "nbd", 00:22:39.533 "config": [] 00:22:39.533 } 00:22:39.533 ] 00:22:39.533 }' 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 1674000 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1674000 ']' 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1674000 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux 
']' 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1674000 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1674000' 00:22:39.533 killing process with pid 1674000 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1674000 00:22:39.533 Received shutdown signal, test time was about 1.000000 seconds 00:22:39.533 00:22:39.533 Latency(us) 00:22:39.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.533 =================================================================================================================== 00:22:39.533 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1674000 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 1673977 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1673977 ']' 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1673977 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.533 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1673977 00:22:39.792 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:39.792 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:39.792 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1673977' 00:22:39.792 killing process with pid 1673977 00:22:39.792 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1673977 00:22:39.792 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1673977 00:22:39.792 19:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:39.792 19:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.792 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.792 19:28:50 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:39.792 "subsystems": [ 00:22:39.792 { 00:22:39.792 "subsystem": "keyring", 00:22:39.792 "config": [ 00:22:39.792 { 00:22:39.792 "method": "keyring_file_add_key", 00:22:39.792 "params": { 00:22:39.792 "name": "key0", 00:22:39.792 "path": "/tmp/tmp.IMYo7rCYwo" 00:22:39.792 } 00:22:39.792 } 00:22:39.792 ] 00:22:39.792 }, 00:22:39.792 { 00:22:39.792 "subsystem": "iobuf", 00:22:39.792 "config": [ 00:22:39.792 { 00:22:39.792 "method": "iobuf_set_options", 00:22:39.792 "params": { 00:22:39.792 "small_pool_count": 8192, 00:22:39.792 "large_pool_count": 1024, 00:22:39.792 "small_bufsize": 8192, 00:22:39.792 "large_bufsize": 135168 00:22:39.792 } 00:22:39.792 } 00:22:39.792 ] 00:22:39.792 }, 00:22:39.792 { 00:22:39.792 "subsystem": "sock", 00:22:39.792 "config": [ 00:22:39.792 { 00:22:39.792 "method": "sock_set_default_impl", 00:22:39.792 "params": { 00:22:39.792 "impl_name": "posix" 00:22:39.792 } 00:22:39.792 }, 00:22:39.792 { 00:22:39.792 "method": "sock_impl_set_options", 00:22:39.792 "params": { 00:22:39.792 
"impl_name": "ssl", 00:22:39.792 "recv_buf_size": 4096, 00:22:39.792 "send_buf_size": 4096, 00:22:39.792 "enable_recv_pipe": true, 00:22:39.792 "enable_quickack": false, 00:22:39.792 "enable_placement_id": 0, 00:22:39.792 "enable_zerocopy_send_server": true, 00:22:39.792 "enable_zerocopy_send_client": false, 00:22:39.792 "zerocopy_threshold": 0, 00:22:39.792 "tls_version": 0, 00:22:39.792 "enable_ktls": false 00:22:39.792 } 00:22:39.792 }, 00:22:39.792 { 00:22:39.792 "method": "sock_impl_set_options", 00:22:39.792 "params": { 00:22:39.792 "impl_name": "posix", 00:22:39.792 "recv_buf_size": 2097152, 00:22:39.792 "send_buf_size": 2097152, 00:22:39.792 "enable_recv_pipe": true, 00:22:39.792 "enable_quickack": false, 00:22:39.792 "enable_placement_id": 0, 00:22:39.792 "enable_zerocopy_send_server": true, 00:22:39.792 "enable_zerocopy_send_client": false, 00:22:39.792 "zerocopy_threshold": 0, 00:22:39.792 "tls_version": 0, 00:22:39.792 "enable_ktls": false 00:22:39.792 } 00:22:39.792 } 00:22:39.792 ] 00:22:39.792 }, 00:22:39.792 { 00:22:39.792 "subsystem": "vmd", 00:22:39.792 "config": [] 00:22:39.792 }, 00:22:39.792 { 00:22:39.792 "subsystem": "accel", 00:22:39.792 "config": [ 00:22:39.792 { 00:22:39.792 "method": "accel_set_options", 00:22:39.792 "params": { 00:22:39.792 "small_cache_size": 128, 00:22:39.792 "large_cache_size": 16, 00:22:39.792 "task_count": 2048, 00:22:39.792 "sequence_count": 2048, 00:22:39.792 "buf_count": 2048 00:22:39.792 } 00:22:39.792 } 00:22:39.792 ] 00:22:39.792 }, 00:22:39.792 { 00:22:39.792 "subsystem": "bdev", 00:22:39.792 "config": [ 00:22:39.792 { 00:22:39.792 "method": "bdev_set_options", 00:22:39.792 "params": { 00:22:39.792 "bdev_io_pool_size": 65535, 00:22:39.792 "bdev_io_cache_size": 256, 00:22:39.792 "bdev_auto_examine": true, 00:22:39.792 "iobuf_small_cache_size": 128, 00:22:39.792 "iobuf_large_cache_size": 16 00:22:39.792 } 00:22:39.792 }, 00:22:39.792 { 00:22:39.792 "method": "bdev_raid_set_options", 00:22:39.792 "params": { 00:22:39.792 "process_window_size_kb": 1024 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "bdev_iscsi_set_options", 00:22:39.793 "params": { 00:22:39.793 "timeout_sec": 30 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "bdev_nvme_set_options", 00:22:39.793 "params": { 00:22:39.793 "action_on_timeout": "none", 00:22:39.793 "timeout_us": 0, 00:22:39.793 "timeout_admin_us": 0, 00:22:39.793 "keep_alive_timeout_ms": 10000, 00:22:39.793 "arbitration_burst": 0, 00:22:39.793 "low_priority_weight": 0, 00:22:39.793 "medium_priority_weight": 0, 00:22:39.793 "high_priority_weight": 0, 00:22:39.793 "nvme_adminq_poll_period_us": 10000, 00:22:39.793 "nvme_ioq_poll_period_us": 0, 00:22:39.793 "io_queue_requests": 0, 00:22:39.793 "delay_cmd_submit": true, 00:22:39.793 "transport_retry_count": 4, 00:22:39.793 "bdev_retry_count": 3, 00:22:39.793 "transport_ack_timeout": 0, 00:22:39.793 "ctrlr_loss_timeout_sec": 0, 00:22:39.793 "reconnect_delay_sec": 0, 00:22:39.793 "fast_io_fail_timeout_sec": 0, 00:22:39.793 "disable_auto_failback": false, 00:22:39.793 "generate_uuids": false, 00:22:39.793 "transport_tos": 0, 00:22:39.793 "nvme_error_stat": false, 00:22:39.793 "rdma_srq_size": 0, 00:22:39.793 "io_path_stat": false, 00:22:39.793 "allow_accel_sequence": false, 00:22:39.793 "rdma_max_cq_size": 0, 00:22:39.793 "rdma_cm_event_timeout_ms": 0, 00:22:39.793 "dhchap_digests": [ 00:22:39.793 "sha256", 00:22:39.793 "sha384", 00:22:39.793 "sha512" 00:22:39.793 ], 00:22:39.793 "dhchap_dhgroups": [ 00:22:39.793 "null", 
00:22:39.793 "ffdhe2048", 00:22:39.793 "ffdhe3072", 00:22:39.793 "ffdhe4096", 00:22:39.793 "ffdhe6144", 00:22:39.793 "ffdhe8192" 00:22:39.793 ] 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "bdev_nvme_set_hotplug", 00:22:39.793 "params": { 00:22:39.793 "period_us": 100000, 00:22:39.793 "enable": false 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "bdev_malloc_create", 00:22:39.793 "params": { 00:22:39.793 "name": "malloc0", 00:22:39.793 "num_blocks": 8192, 00:22:39.793 "block_size": 4096, 00:22:39.793 "physical_block_size": 4096, 00:22:39.793 "uuid": "11ef6452-f8dd-4a25-a34d-422f22ef8c58", 00:22:39.793 "optimal_io_boundary": 0 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "bdev_wait_for_examine" 00:22:39.793 } 00:22:39.793 ] 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "subsystem": "nbd", 00:22:39.793 "config": [] 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "subsystem": "scheduler", 00:22:39.793 "config": [ 00:22:39.793 { 00:22:39.793 "method": "framework_set_scheduler", 00:22:39.793 "params": { 00:22:39.793 "name": "static" 00:22:39.793 } 00:22:39.793 } 00:22:39.793 ] 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "subsystem": "nvmf", 00:22:39.793 "config": [ 00:22:39.793 { 00:22:39.793 "method": "nvmf_set_config", 00:22:39.793 "params": { 00:22:39.793 "discovery_filter": "match_any", 00:22:39.793 "admin_cmd_passthru": { 00:22:39.793 "identify_ctrlr": false 00:22:39.793 } 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "nvmf_set_max_subsystems", 00:22:39.793 "params": { 00:22:39.793 "max_subsystems": 1024 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "nvmf_set_crdt", 00:22:39.793 "params": { 00:22:39.793 "crdt1": 0, 00:22:39.793 "crdt2": 0, 00:22:39.793 "crdt3": 0 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "nvmf_create_transport", 00:22:39.793 "params": { 00:22:39.793 "trtype": "TCP", 00:22:39.793 "max_queue_depth": 128, 00:22:39.793 "max_io_qpairs_per_ctrlr": 127, 00:22:39.793 "in_capsule_data_size": 4096, 00:22:39.793 "max_io_size": 131072, 00:22:39.793 "io_unit_size": 131072, 00:22:39.793 "max_aq_depth": 128, 00:22:39.793 "num_shared_buffers": 511, 00:22:39.793 "buf_cache_size": 4294967295, 00:22:39.793 "dif_insert_or_strip": false, 00:22:39.793 "zcopy": false, 00:22:39.793 "c2h_success": false, 00:22:39.793 "sock_priority": 0, 00:22:39.793 "abort_timeout_sec": 1, 00:22:39.793 "ack_timeout": 0, 00:22:39.793 "data_wr_pool_size": 0 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "nvmf_create_subsystem", 00:22:39.793 "params": { 00:22:39.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.793 "allow_any_host": false, 00:22:39.793 "serial_number": "00000000000000000000", 00:22:39.793 "model_number": "SPDK bdev Controller", 00:22:39.793 "max_namespaces": 32, 00:22:39.793 "min_cntlid": 1, 00:22:39.793 "max_cntlid": 65519, 00:22:39.793 "ana_reporting": false 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "nvmf_subsystem_add_host", 00:22:39.793 "params": { 00:22:39.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.793 "host": "nqn.2016-06.io.spdk:host1", 00:22:39.793 "psk": "key0" 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "nvmf_subsystem_add_ns", 00:22:39.793 "params": { 00:22:39.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.793 "namespace": { 00:22:39.793 "nsid": 1, 00:22:39.793 "bdev_name": "malloc0", 00:22:39.793 "nguid": "11EF6452F8DD4A25A34D422F22EF8C58", 00:22:39.793 "uuid": 
"11ef6452-f8dd-4a25-a34d-422f22ef8c58", 00:22:39.793 "no_auto_visible": false 00:22:39.793 } 00:22:39.793 } 00:22:39.793 }, 00:22:39.793 { 00:22:39.793 "method": "nvmf_subsystem_add_listener", 00:22:39.793 "params": { 00:22:39.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.793 "listen_address": { 00:22:39.793 "trtype": "TCP", 00:22:39.793 "adrfam": "IPv4", 00:22:39.793 "traddr": "10.0.0.2", 00:22:39.793 "trsvcid": "4420" 00:22:39.793 }, 00:22:39.793 "secure_channel": false, 00:22:39.793 "sock_impl": "ssl" 00:22:39.793 } 00:22:39.793 } 00:22:39.793 ] 00:22:39.793 } 00:22:39.793 ] 00:22:39.793 }' 00:22:39.793 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.793 19:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1674477 00:22:39.793 19:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1674477 00:22:39.793 19:28:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:39.793 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1674477 ']' 00:22:39.793 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.793 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.793 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.794 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.794 19:28:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.052 [2024-07-15 19:28:50.652232] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:40.052 [2024-07-15 19:28:50.652281] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.052 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.052 [2024-07-15 19:28:50.680794] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:40.052 [2024-07-15 19:28:50.708347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.052 [2024-07-15 19:28:50.748023] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.052 [2024-07-15 19:28:50.748061] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.052 [2024-07-15 19:28:50.748068] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.052 [2024-07-15 19:28:50.748074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.052 [2024-07-15 19:28:50.748079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.052 [2024-07-15 19:28:50.748150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.312 [2024-07-15 19:28:50.952693] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.312 [2024-07-15 19:28:50.984733] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.312 [2024-07-15 19:28:50.996536] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.879 19:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1674613 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1674613 /var/tmp/bdevperf.sock 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1674613 ']' 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
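For readability: the target-side JSON that tls.sh echoes into /dev/fd/62 above is hard to follow once the log timestamps are interleaved, but it boils down to a keyring entry named key0 pointing at the PSK file, plus an NVMe/TCP subsystem whose host nqn.2016-06.io.spdk:host1 is bound to that key and whose listener on 10.0.0.2:4420 uses the ssl sock implementation with secure_channel disabled. The sketch below rebuilds just that subset as a standalone config file; the method names and parameters are copied from the echoed config, while the trimmed structure, the /tmp file name, the omission of the malloc0 namespace, and running nvmf_tgt outside the cvl_0_0_ns_spdk namespace are assumptions made only for illustration, not part of the test itself.

# Hypothetical minimal reproduction of the TLS PSK target config echoed above.
# Only the keyring and nvmf subsystems are kept; bdev/sock/accel defaults apply,
# and no namespace is added (the test additionally creates malloc0 and adds it).
cat > /tmp/tls_tgt.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.IMYo7rCYwo" } }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": false, "sock_impl": "ssl" } }
      ]
    }
  ]
}
EOF
# Start the target with that config (assumes a built SPDK tree in the current dir).
./build/bin/nvmf_tgt -c /tmp/tls_tgt.json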
00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:40.880 "subsystems": [ 00:22:40.880 { 00:22:40.880 "subsystem": "keyring", 00:22:40.880 "config": [ 00:22:40.880 { 00:22:40.880 "method": "keyring_file_add_key", 00:22:40.880 "params": { 00:22:40.880 "name": "key0", 00:22:40.880 "path": "/tmp/tmp.IMYo7rCYwo" 00:22:40.880 } 00:22:40.880 } 00:22:40.880 ] 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "subsystem": "iobuf", 00:22:40.880 "config": [ 00:22:40.880 { 00:22:40.880 "method": "iobuf_set_options", 00:22:40.880 "params": { 00:22:40.880 "small_pool_count": 8192, 00:22:40.880 "large_pool_count": 1024, 00:22:40.880 "small_bufsize": 8192, 00:22:40.880 "large_bufsize": 135168 00:22:40.880 } 00:22:40.880 } 00:22:40.880 ] 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "subsystem": "sock", 00:22:40.880 "config": [ 00:22:40.880 { 00:22:40.880 "method": "sock_set_default_impl", 00:22:40.880 "params": { 00:22:40.880 "impl_name": "posix" 00:22:40.880 } 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "method": "sock_impl_set_options", 00:22:40.880 "params": { 00:22:40.880 "impl_name": "ssl", 00:22:40.880 "recv_buf_size": 4096, 00:22:40.880 "send_buf_size": 4096, 00:22:40.880 "enable_recv_pipe": true, 00:22:40.880 "enable_quickack": false, 00:22:40.880 "enable_placement_id": 0, 00:22:40.880 "enable_zerocopy_send_server": true, 00:22:40.880 "enable_zerocopy_send_client": false, 00:22:40.880 "zerocopy_threshold": 0, 00:22:40.880 "tls_version": 0, 00:22:40.880 "enable_ktls": false 00:22:40.880 } 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "method": "sock_impl_set_options", 00:22:40.880 "params": { 00:22:40.880 "impl_name": "posix", 00:22:40.880 "recv_buf_size": 2097152, 00:22:40.880 "send_buf_size": 2097152, 00:22:40.880 "enable_recv_pipe": true, 00:22:40.880 "enable_quickack": false, 00:22:40.880 "enable_placement_id": 0, 00:22:40.880 "enable_zerocopy_send_server": true, 00:22:40.880 "enable_zerocopy_send_client": false, 00:22:40.880 "zerocopy_threshold": 0, 00:22:40.880 "tls_version": 0, 00:22:40.880 "enable_ktls": false 00:22:40.880 } 00:22:40.880 } 00:22:40.880 ] 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "subsystem": "vmd", 00:22:40.880 "config": [] 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "subsystem": "accel", 00:22:40.880 "config": [ 00:22:40.880 { 00:22:40.880 "method": "accel_set_options", 00:22:40.880 "params": { 00:22:40.880 "small_cache_size": 128, 00:22:40.880 "large_cache_size": 16, 00:22:40.880 "task_count": 2048, 00:22:40.880 "sequence_count": 2048, 00:22:40.880 "buf_count": 2048 00:22:40.880 } 00:22:40.880 } 00:22:40.880 ] 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "subsystem": "bdev", 00:22:40.880 "config": [ 00:22:40.880 { 00:22:40.880 "method": "bdev_set_options", 00:22:40.880 "params": { 00:22:40.880 "bdev_io_pool_size": 65535, 00:22:40.880 "bdev_io_cache_size": 256, 00:22:40.880 "bdev_auto_examine": true, 00:22:40.880 "iobuf_small_cache_size": 128, 00:22:40.880 "iobuf_large_cache_size": 16 00:22:40.880 } 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "method": "bdev_raid_set_options", 00:22:40.880 "params": { 00:22:40.880 "process_window_size_kb": 1024 00:22:40.880 } 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "method": "bdev_iscsi_set_options", 00:22:40.880 "params": { 00:22:40.880 "timeout_sec": 30 00:22:40.880 } 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "method": "bdev_nvme_set_options", 00:22:40.880 "params": { 00:22:40.880 "action_on_timeout": "none", 00:22:40.880 "timeout_us": 0, 00:22:40.880 "timeout_admin_us": 0, 00:22:40.880 "keep_alive_timeout_ms": 
10000, 00:22:40.880 "arbitration_burst": 0, 00:22:40.880 "low_priority_weight": 0, 00:22:40.880 "medium_priority_weight": 0, 00:22:40.880 "high_priority_weight": 0, 00:22:40.880 "nvme_adminq_poll_period_us": 10000, 00:22:40.880 "nvme_ioq_poll_period_us": 0, 00:22:40.880 "io_queue_requests": 512, 00:22:40.880 "delay_cmd_submit": true, 00:22:40.880 "transport_retry_count": 4, 00:22:40.880 "bdev_retry_count": 3, 00:22:40.880 "transport_ack_timeout": 0, 00:22:40.880 "ctrlr_loss_timeout_sec": 0, 00:22:40.880 "reconnect_delay_sec": 0, 00:22:40.880 "fast_io_fail_timeout_sec": 0, 00:22:40.880 "disable_auto_failback": false, 00:22:40.880 "generate_uuids": false, 00:22:40.880 "transport_tos": 0, 00:22:40.880 "nvme_error_stat": false, 00:22:40.880 "rdma_srq_size": 0, 00:22:40.880 "io_path_stat": false, 00:22:40.880 "allow_accel_sequence": false, 00:22:40.880 "rdma_max_cq_size": 0, 00:22:40.880 "rdma_cm_event_timeout_ms": 0, 00:22:40.880 "dhchap_digests": [ 00:22:40.880 "sha256", 00:22:40.880 "sha384", 00:22:40.880 "sha512" 00:22:40.880 ], 00:22:40.880 "dhchap_dhgroups": [ 00:22:40.880 "null", 00:22:40.880 "ffdhe2048", 00:22:40.880 "ffdhe3072", 00:22:40.880 "ffdhe4096", 00:22:40.880 "ffdhe6144", 00:22:40.880 "ffdhe8192" 00:22:40.880 ] 00:22:40.880 } 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "method": "bdev_nvme_attach_controller", 00:22:40.880 "params": { 00:22:40.880 "name": "nvme0", 00:22:40.880 "trtype": "TCP", 00:22:40.880 "adrfam": "IPv4", 00:22:40.880 "traddr": "10.0.0.2", 00:22:40.880 "trsvcid": "4420", 00:22:40.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.880 "prchk_reftag": false, 00:22:40.880 "prchk_guard": false, 00:22:40.880 "ctrlr_loss_timeout_sec": 0, 00:22:40.880 "reconnect_delay_sec": 0, 00:22:40.880 "fast_io_fail_timeout_sec": 0, 00:22:40.880 "psk": "key0", 00:22:40.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.880 "hdgst": false, 00:22:40.880 "ddgst": false 00:22:40.880 } 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "method": "bdev_nvme_set_hotplug", 00:22:40.880 "params": { 00:22:40.880 "period_us": 100000, 00:22:40.880 "enable": false 00:22:40.880 } 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "method": "bdev_enable_histogram", 00:22:40.880 "params": { 00:22:40.880 "name": "nvme0n1", 00:22:40.880 "enable": true 00:22:40.880 } 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "method": "bdev_wait_for_examine" 00:22:40.880 } 00:22:40.880 ] 00:22:40.880 }, 00:22:40.880 { 00:22:40.880 "subsystem": "nbd", 00:22:40.880 "config": [] 00:22:40.880 } 00:22:40.880 ] 00:22:40.880 }' 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.880 19:28:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.880 [2024-07-15 19:28:51.532152] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:40.880 [2024-07-15 19:28:51.532202] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1674613 ] 00:22:40.880 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.880 [2024-07-15 19:28:51.558644] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
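On the initiator side the same PSK is consumed by bdevperf: the config echoed into /dev/fd/63 above registers key0 in the keyring and then calls bdev_nvme_attach_controller against 10.0.0.2:4420 with psk key0 and hostnqn nqn.2016-06.io.spdk:host1, so the TCP connection is brought up over TLS before the verify workload runs (the later fips.sh run instead passes the deprecated --psk key-file path, which is why the deprecation warnings appear there). Below is a hypothetical standalone counterpart of that client side; the two methods and their parameters mirror the echoed config, while the trimmed JSON, the /tmp file names, and running the workload directly instead of via -z plus bdevperf.py perform_tests are illustrative assumptions.

# Hypothetical client-side sketch: attach an NVMe-oF/TCP controller over TLS with
# the keyring-based PSK "key0", then run a short verify workload on nvme0n1.
# Checksum/retry options from the full config are left at their defaults here.
cat > /tmp/tls_bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.IMYo7rCYwo" } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                      "traddr": "10.0.0.2", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Without -z, bdevperf starts the job immediately instead of waiting for perform_tests.
./build/examples/bdevperf -q 128 -o 4096 -w verify -t 1 -c /tmp/tls_bdevperf.json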
00:22:40.880 [2024-07-15 19:28:51.587230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.880 [2024-07-15 19:28:51.628590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.138 [2024-07-15 19:28:51.775026] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.704 19:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.704 19:28:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:41.704 19:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:41.704 19:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:41.704 19:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.704 19:28:52 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:41.962 Running I/O for 1 seconds... 00:22:42.896 00:22:42.896 Latency(us) 00:22:42.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.896 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:42.896 Verification LBA range: start 0x0 length 0x2000 00:22:42.897 nvme0n1 : 1.02 5414.21 21.15 0.00 0.00 23440.45 5784.26 35788.35 00:22:42.897 =================================================================================================================== 00:22:42.897 Total : 5414.21 21.15 0.00 0.00 23440.45 5784.26 35788.35 00:22:42.897 0 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:42.897 nvmf_trace.0 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1674613 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1674613 ']' 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1674613 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.897 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1674613 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1674613' 00:22:43.155 killing process with pid 1674613 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1674613 00:22:43.155 Received shutdown signal, test time was about 1.000000 seconds 00:22:43.155 00:22:43.155 Latency(us) 00:22:43.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.155 =================================================================================================================== 00:22:43.155 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1674613 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.155 19:28:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:43.155 rmmod nvme_tcp 00:22:43.155 rmmod nvme_fabrics 00:22:43.155 rmmod nvme_keyring 00:22:43.414 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:43.414 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:43.414 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1674477 ']' 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1674477 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1674477 ']' 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1674477 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1674477 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1674477' 00:22:43.415 killing process with pid 1674477 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1674477 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1674477 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.415 19:28:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.950 19:28:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.950 19:28:56 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3Qt3Pg8nkI /tmp/tmp.WOIWxY69JM /tmp/tmp.IMYo7rCYwo 00:22:45.950 00:22:45.950 real 1m13.642s 00:22:45.950 user 1m52.643s 00:22:45.950 sys 0m26.883s 00:22:45.950 19:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:45.950 19:28:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.950 ************************************ 00:22:45.950 END TEST nvmf_tls 00:22:45.950 ************************************ 00:22:45.950 19:28:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:45.950 19:28:56 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:45.950 19:28:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:45.950 19:28:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.950 19:28:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.950 ************************************ 00:22:45.950 START TEST nvmf_fips 00:22:45.950 ************************************ 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:45.950 * Looking for test storage... 00:22:45.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:45.950 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:45.951 Error setting digest 00:22:45.951 00821D2ACF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:45.951 00821D2ACF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.951 19:28:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.223 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.224 
19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:51.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:51.224 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:51.224 Found net devices under 0000:86:00.0: cvl_0_0 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:51.224 Found net devices under 0000:86:00.1: cvl_0_1 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.224 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:22:51.483 00:22:51.483 --- 10.0.0.2 ping statistics --- 00:22:51.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.483 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:22:51.483 00:22:51.483 --- 10.0.0.1 ping statistics --- 00:22:51.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.483 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1678510 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1678510 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1678510 ']' 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.483 19:29:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.742 19:29:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.742 19:29:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.742 19:29:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:51.742 [2024-07-15 19:29:02.399849] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:51.742 [2024-07-15 19:29:02.399895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.743 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.743 [2024-07-15 19:29:02.429272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:51.743 [2024-07-15 19:29:02.457078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.743 [2024-07-15 19:29:02.498000] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:51.743 [2024-07-15 19:29:02.498035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.743 [2024-07-15 19:29:02.498042] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.743 [2024-07-15 19:29:02.498048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.743 [2024-07-15 19:29:02.498054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.743 [2024-07-15 19:29:02.498080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:52.673 [2024-07-15 19:29:03.360864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.673 [2024-07-15 19:29:03.376876] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.673 [2024-07-15 19:29:03.377056] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.673 [2024-07-15 19:29:03.405065] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:52.673 malloc0 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1678759 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1678759 /var/tmp/bdevperf.sock 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1678759 ']' 00:22:52.673 19:29:03 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.673 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.673 [2024-07-15 19:29:03.481315] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:22:52.673 [2024-07-15 19:29:03.481363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1678759 ] 00:22:52.673 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.673 [2024-07-15 19:29:03.506430] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:52.930 [2024-07-15 19:29:03.531790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.930 [2024-07-15 19:29:03.575186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.930 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.930 19:29:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:52.930 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:53.189 [2024-07-15 19:29:03.806964] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.189 [2024-07-15 19:29:03.807062] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.189 TLSTESTn1 00:22:53.189 19:29:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.189 Running I/O for 10 seconds... 
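(Editorial sketch, condensed from the trace above: the TLS data-path check writes the interchange-format PSK to a key file, starts bdevperf in RPC-driven mode, attaches a TLS-enabled controller through bdevperf's RPC socket, then runs the 10-second verify workload whose results follow. Relative paths assume the SPDK repo root and stand in for the absolute Jenkins workspace paths in the log; the key, NQNs and addresses are the ones this particular run used.)

    # write the PSK shown in the trace and restrict its permissions
    echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > test/nvmf/fips/key.txt
    chmod 0600 test/nvmf/fips/key.txt
    # start bdevperf in RPC-driven mode (-z) on its own socket; the test waits for
    # the socket (waitforlisten) before issuing commands
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # attach a TLS-enabled controller through that socket, passing the PSK file
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    # drive the 10-second verify workload summarized in the latency table below
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests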
00:23:03.194 00:23:03.194 Latency(us) 00:23:03.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.194 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:03.194 Verification LBA range: start 0x0 length 0x2000 00:23:03.194 TLSTESTn1 : 10.03 3789.49 14.80 0.00 0.00 33718.38 6468.12 68841.29 00:23:03.194 =================================================================================================================== 00:23:03.194 Total : 3789.49 14.80 0.00 0.00 33718.38 6468.12 68841.29 00:23:03.194 0 00:23:03.194 19:29:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:03.194 19:29:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:03.194 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:23:03.194 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:23:03.194 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:03.194 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:03.453 nvmf_trace.0 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1678759 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1678759 ']' 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1678759 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1678759 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1678759' 00:23:03.453 killing process with pid 1678759 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1678759 00:23:03.453 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.453 00:23:03.453 Latency(us) 00:23:03.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.453 =================================================================================================================== 00:23:03.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.453 [2024-07-15 19:29:14.172992] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:03.453 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1678759 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.712 rmmod nvme_tcp 00:23:03.712 rmmod nvme_fabrics 00:23:03.712 rmmod nvme_keyring 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1678510 ']' 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1678510 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1678510 ']' 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1678510 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1678510 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1678510' 00:23:03.712 killing process with pid 1678510 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1678510 00:23:03.712 [2024-07-15 19:29:14.467590] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:03.712 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1678510 00:23:03.971 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.971 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.971 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.971 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.971 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.971 19:29:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.971 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.971 19:29:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.879 19:29:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:05.879 19:29:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:05.879 00:23:05.879 real 0m20.327s 00:23:05.879 user 0m21.020s 00:23:05.879 sys 0m9.308s 00:23:05.879 19:29:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.879 19:29:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:05.879 ************************************ 00:23:05.879 END TEST nvmf_fips 
00:23:05.879 ************************************ 00:23:06.138 19:29:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:06.138 19:29:16 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:06.138 19:29:16 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:06.138 19:29:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:06.138 19:29:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:06.138 19:29:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.138 ************************************ 00:23:06.138 START TEST nvmf_fuzz 00:23:06.138 ************************************ 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:06.138 * Looking for test storage... 00:23:06.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.138 19:29:16 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.139 19:29:16 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.139 19:29:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:11.410 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:11.410 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:11.410 Found net devices under 0000:86:00.0: cvl_0_0 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:11.410 Found net devices under 0000:86:00.1: cvl_0_1 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:11.410 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:11.411 19:29:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:11.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:23:11.411 00:23:11.411 --- 10.0.0.2 ping statistics --- 00:23:11.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.411 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:23:11.411 00:23:11.411 --- 10.0.0.1 ping statistics --- 00:23:11.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.411 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1683891 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1683891 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1683891 ']' 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
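(Editorial sketch: stripped of the xtrace noise, the namespace plumbing common.sh traced above reduces to the commands below. Interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this CI host; the first port is moved into a namespace as the target side and the second stays in the root namespace as the initiator.)

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP port 4420 on the initiator-side interface
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator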
00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.411 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:11.670 Malloc0 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:11.670 19:29:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:43.740 Fuzzing completed. 
Shutting down the fuzz application 00:23:43.740 00:23:43.740 Dumping successful admin opcodes: 00:23:43.740 8, 9, 10, 24, 00:23:43.740 Dumping successful io opcodes: 00:23:43.740 0, 9, 00:23:43.740 NS: 0x200003aeff00 I/O qp, Total commands completed: 877599, total successful commands: 5101, random_seed: 912942080 00:23:43.740 NS: 0x200003aeff00 admin qp, Total commands completed: 83998, total successful commands: 668, random_seed: 1144311040 00:23:43.741 19:29:52 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:43.741 Fuzzing completed. Shutting down the fuzz application 00:23:43.741 00:23:43.741 Dumping successful admin opcodes: 00:23:43.741 24, 00:23:43.741 Dumping successful io opcodes: 00:23:43.741 00:23:43.741 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1234846618 00:23:43.741 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1234920454 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.741 rmmod nvme_tcp 00:23:43.741 rmmod nvme_fabrics 00:23:43.741 rmmod nvme_keyring 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1683891 ']' 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1683891 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1683891 ']' 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1683891 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1683891 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
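(Editorial sketch of the first fuzz pass above: create a TCP transport and a malloc-backed subsystem on the nvmf_tgt started earlier inside the namespace, then aim nvme_fuzz at its listener. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py; paths are shown relative to the SPDK repo root, and all flags are copied verbatim from the trace rather than being generic defaults.)

    # target-side configuration (nvmf_tgt already running in cvl_0_0_ns_spdk)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 30-second randomized pass against the listener described by -F
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a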
00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1683891' 00:23:43.741 killing process with pid 1683891 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1683891 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1683891 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.741 19:29:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.647 19:29:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:45.647 19:29:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:45.647 00:23:45.647 real 0m39.638s 00:23:45.647 user 0m52.252s 00:23:45.647 sys 0m16.757s 00:23:45.647 19:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:45.647 19:29:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:45.647 ************************************ 00:23:45.647 END TEST nvmf_fuzz 00:23:45.647 ************************************ 00:23:45.647 19:29:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:45.647 19:29:56 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:45.647 19:29:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:45.647 19:29:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:45.647 19:29:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:45.907 ************************************ 00:23:45.907 START TEST nvmf_multiconnection 00:23:45.907 ************************************ 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:45.907 * Looking for test storage... 
00:23:45.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:45.907 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:45.908 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.908 19:29:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.908 19:29:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.908 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:45.908 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:45.908 19:29:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:45.908 19:29:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.224 19:30:01 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:51.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:51.224 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:51.224 Found net devices under 0000:86:00.0: cvl_0_0 00:23:51.224 19:30:01 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:51.224 Found net devices under 0000:86:00.1: cvl_0_1 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:23:51.224 00:23:51.224 --- 10.0.0.2 ping statistics --- 00:23:51.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.224 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:23:51.224 00:23:51.224 --- 10.0.0.1 ping statistics --- 00:23:51.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.224 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1692538 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1692538 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1692538 ']' 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.224 [2024-07-15 19:30:01.782616] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:23:51.224 [2024-07-15 19:30:01.782663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.224 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.224 [2024-07-15 19:30:01.811814] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:51.224 [2024-07-15 19:30:01.838702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.224 [2024-07-15 19:30:01.881790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.224 [2024-07-15 19:30:01.881830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.224 [2024-07-15 19:30:01.881837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.224 [2024-07-15 19:30:01.881843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.224 [2024-07-15 19:30:01.881848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.224 [2024-07-15 19:30:01.881890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.224 [2024-07-15 19:30:01.881985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.224 [2024-07-15 19:30:01.882063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.224 [2024-07-15 19:30:01.882064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.224 19:30:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.224 19:30:02 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.224 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.224 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.224 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.224 [2024-07-15 19:30:02.021129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.224 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.224 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.225 Malloc1 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.225 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.225 [2024-07-15 19:30:02.076671] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 Malloc2 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 Malloc3 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 Malloc4 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 
Malloc4 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 Malloc5 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 Malloc6 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:51.484 19:30:02 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 Malloc7 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.484 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.742 Malloc8 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.742 Malloc9 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.742 19:30:02 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.742 Malloc10 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:51.742 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.743 Malloc11 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.743 19:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:53.114 19:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:53.114 19:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:53.114 19:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:53.114 19:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:53.114 19:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:55.011 19:30:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:55.011 19:30:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:55.011 19:30:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:23:55.011 19:30:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:55.011 19:30:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:55.011 19:30:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:55.011 19:30:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.011 19:30:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:56.394 19:30:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:56.394 19:30:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:56.394 19:30:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:56.394 19:30:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:56.394 19:30:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:23:58.298 19:30:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:23:58.298 19:30:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:23:58.298 19:30:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:23:58.298 19:30:08 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:23:58.298 19:30:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:23:58.298 19:30:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:23:58.298 19:30:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.298 19:30:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:59.676 19:30:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:59.676 19:30:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.676 19:30:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:23:59.676 19:30:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:23:59.676 19:30:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:01.585 19:30:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:01.585 19:30:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:01.585 19:30:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:01.585 19:30:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:01.585 19:30:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:01.585 19:30:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:01.585 19:30:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.585 19:30:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:02.983 19:30:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:02.983 19:30:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:02.983 19:30:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:02.983 19:30:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:02.983 19:30:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:04.882 19:30:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:04.882 19:30:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:04.883 19:30:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:04.883 19:30:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:04.883 19:30:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:04.883 19:30:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 
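Note: the per-subsystem provisioning traced above (target/multiconnection.sh lines 21-25) is the same five RPCs repeated for indexes 1..11 after the TCP transport is created. A condensed sketch follows, assuming rpc_cmd resolves to scripts/rpc.py against /var/tmp/spdk.sock (the wrapper's exact definition is in autotest_common.sh and is not shown in this log).

# Condensed sketch of the provisioning loop: one 64 MiB malloc bdev with 512-byte blocks,
# one subsystem, one namespace and one TCP listener per index.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }   # assumed rpc_cmd equivalent
NVMF_SUBSYS=11

rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc bdev_malloc_create 64 512 -b "Malloc$i"
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done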
00:24:04.883 19:30:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.883 19:30:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:06.258 19:30:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:06.258 19:30:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:06.258 19:30:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:06.258 19:30:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:06.258 19:30:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:08.162 19:30:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:08.162 19:30:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:08.162 19:30:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:08.162 19:30:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:08.162 19:30:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:08.162 19:30:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:08.162 19:30:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.162 19:30:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:09.540 19:30:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:09.540 19:30:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:09.540 19:30:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:09.540 19:30:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:09.540 19:30:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:11.513 19:30:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:11.513 19:30:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:11.513 19:30:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:11.513 19:30:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:11.513 19:30:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:11.513 19:30:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:11.513 19:30:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.513 19:30:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:12.891 19:30:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:12.891 19:30:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:12.891 19:30:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:12.891 19:30:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:12.891 19:30:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:14.795 19:30:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:14.795 19:30:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:14.795 19:30:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:14.795 19:30:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:14.795 19:30:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:14.795 19:30:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:14.795 19:30:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:14.795 19:30:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:16.171 19:30:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:16.171 19:30:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:16.171 19:30:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:16.171 19:30:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:16.171 19:30:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:18.074 19:30:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:18.074 19:30:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:18.074 19:30:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:24:18.074 19:30:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:18.074 19:30:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:18.074 19:30:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:18.074 19:30:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.074 19:30:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:19.449 19:30:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:19.449 19:30:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 
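Note: the connect phase traced above (multiconnection.sh line 29 plus waitforserial) repeats one pattern per subsystem: nvme connect over TCP, then poll lsblk until a namespace with the expected serial appears. The retry logic below only approximates the waitforserial helper; the host NQN/ID are the ones used in this run.

# Approximation of the connect/verify loop; host NQN/ID taken from this run.
NVMF_SUBSYS=11
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420

    # waitforserial: give the namespace up to ~15 retries (2 s apart) to show up in lsblk.
    retries=0
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        retries=$((retries + 1))
        [ "$retries" -gt 15 ] && { echo "serial SPDK$i never appeared" >&2; exit 1; }
        sleep 2
    done
done

Once all eleven namespaces are visible, the trace hands the resulting /dev/nvme*n1 devices to fio via scripts/fio-wrapper, first for a 10-second sequential read pass and then for a randwrite pass, as shown below.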
00:24:19.449 19:30:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:19.449 19:30:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:19.450 19:30:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:21.354 19:30:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:21.354 19:30:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:21.354 19:30:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:21.354 19:30:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:21.354 19:30:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:21.354 19:30:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:21.354 19:30:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.354 19:30:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:22.731 19:30:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:22.731 19:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:22.731 19:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:22.731 19:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:22.731 19:30:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:24.632 19:30:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:24.632 19:30:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:24.633 19:30:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:24.891 19:30:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:24.891 19:30:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:24.891 19:30:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:24.891 19:30:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:24.891 19:30:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:26.269 19:30:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:26.269 19:30:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:26.269 19:30:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:26.269 19:30:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:26.269 19:30:36 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1205 -- # sleep 2 00:24:28.172 19:30:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:28.173 19:30:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:28.173 19:30:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:28.173 19:30:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:28.173 19:30:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:28.173 19:30:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:28.173 19:30:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:28.173 [global] 00:24:28.173 thread=1 00:24:28.173 invalidate=1 00:24:28.173 rw=read 00:24:28.173 time_based=1 00:24:28.173 runtime=10 00:24:28.173 ioengine=libaio 00:24:28.173 direct=1 00:24:28.173 bs=262144 00:24:28.173 iodepth=64 00:24:28.173 norandommap=1 00:24:28.173 numjobs=1 00:24:28.173 00:24:28.173 [job0] 00:24:28.173 filename=/dev/nvme0n1 00:24:28.173 [job1] 00:24:28.173 filename=/dev/nvme10n1 00:24:28.173 [job2] 00:24:28.173 filename=/dev/nvme1n1 00:24:28.173 [job3] 00:24:28.173 filename=/dev/nvme2n1 00:24:28.173 [job4] 00:24:28.173 filename=/dev/nvme3n1 00:24:28.173 [job5] 00:24:28.173 filename=/dev/nvme4n1 00:24:28.173 [job6] 00:24:28.173 filename=/dev/nvme5n1 00:24:28.173 [job7] 00:24:28.173 filename=/dev/nvme6n1 00:24:28.173 [job8] 00:24:28.173 filename=/dev/nvme7n1 00:24:28.173 [job9] 00:24:28.173 filename=/dev/nvme8n1 00:24:28.173 [job10] 00:24:28.173 filename=/dev/nvme9n1 00:24:28.463 Could not set queue depth (nvme0n1) 00:24:28.463 Could not set queue depth (nvme10n1) 00:24:28.463 Could not set queue depth (nvme1n1) 00:24:28.463 Could not set queue depth (nvme2n1) 00:24:28.463 Could not set queue depth (nvme3n1) 00:24:28.463 Could not set queue depth (nvme4n1) 00:24:28.463 Could not set queue depth (nvme5n1) 00:24:28.463 Could not set queue depth (nvme6n1) 00:24:28.463 Could not set queue depth (nvme7n1) 00:24:28.463 Could not set queue depth (nvme8n1) 00:24:28.463 Could not set queue depth (nvme9n1) 00:24:28.721 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:28.721 fio-3.35 00:24:28.721 Starting 11 threads 00:24:40.984 00:24:40.984 job0: (groupid=0, jobs=1): err= 0: pid=1699378: Mon Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=792, BW=198MiB/s (208MB/s)(2001MiB/10101msec) 00:24:40.984 slat (usec): min=8, max=127529, avg=881.75, stdev=4276.54 00:24:40.984 clat (msec): min=2, max=301, avg=79.80, stdev=46.44 00:24:40.984 lat (msec): min=2, max=301, avg=80.68, stdev=47.03 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 11], 5.00th=[ 20], 10.00th=[ 27], 20.00th=[ 34], 00:24:40.984 | 30.00th=[ 45], 40.00th=[ 61], 50.00th=[ 73], 60.00th=[ 89], 00:24:40.984 | 70.00th=[ 105], 80.00th=[ 123], 90.00th=[ 144], 95.00th=[ 165], 00:24:40.984 | 99.00th=[ 194], 99.50th=[ 213], 99.90th=[ 266], 99.95th=[ 271], 00:24:40.984 | 99.99th=[ 300] 00:24:40.984 bw ( KiB/s): min=113152, max=398848, per=9.37%, avg=203238.40, stdev=75716.71, samples=20 00:24:40.984 iops : min= 442, max= 1558, avg=793.90, stdev=295.77, samples=20 00:24:40.984 lat (msec) : 4=0.06%, 10=0.87%, 20=4.55%, 50=27.64%, 100=34.33% 00:24:40.984 lat (msec) : 250=32.34%, 500=0.20% 00:24:40.984 cpu : usr=0.29%, sys=2.98%, ctx=1798, majf=0, minf=4097 00:24:40.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:40.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.984 issued rwts: total=8002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.984 job1: (groupid=0, jobs=1): err= 0: pid=1699379: Mon Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=679, BW=170MiB/s (178MB/s)(1715MiB/10097msec) 00:24:40.984 slat (usec): min=8, max=157376, avg=1074.42, stdev=5016.12 00:24:40.984 clat (usec): min=869, max=325186, avg=93055.96, stdev=46951.95 00:24:40.984 lat (usec): min=898, max=325241, avg=94130.38, stdev=47657.46 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 39], 20.00th=[ 55], 00:24:40.984 | 30.00th=[ 68], 40.00th=[ 78], 50.00th=[ 88], 60.00th=[ 102], 00:24:40.984 | 70.00th=[ 116], 80.00th=[ 130], 90.00th=[ 153], 95.00th=[ 176], 00:24:40.984 | 99.00th=[ 234], 99.50th=[ 239], 99.90th=[ 247], 99.95th=[ 305], 00:24:40.984 | 99.99th=[ 326] 00:24:40.984 bw ( KiB/s): min=72192, max=278528, per=8.02%, avg=174001.55, stdev=52578.14, samples=20 00:24:40.984 iops : min= 282, max= 1088, avg=679.65, stdev=205.33, samples=20 00:24:40.984 lat (usec) : 1000=0.06% 00:24:40.984 lat (msec) : 2=0.17%, 4=0.99%, 10=2.65%, 20=1.97%, 50=10.53% 00:24:40.984 lat (msec) : 100=42.67%, 250=40.87%, 500=0.09% 00:24:40.984 cpu : usr=0.22%, sys=2.72%, ctx=1649, majf=0, minf=4097 00:24:40.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:40.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.984 issued rwts: total=6859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.984 job2: (groupid=0, jobs=1): err= 0: pid=1699380: Mon Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=737, BW=184MiB/s (193MB/s)(1862MiB/10097msec) 00:24:40.984 slat (usec): min=10, max=121658, avg=1021.41, stdev=4492.24 
00:24:40.984 clat (usec): min=844, max=262530, avg=85673.99, stdev=48837.29 00:24:40.984 lat (usec): min=871, max=342469, avg=86695.40, stdev=49553.70 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 18], 20.00th=[ 39], 00:24:40.984 | 30.00th=[ 60], 40.00th=[ 72], 50.00th=[ 86], 60.00th=[ 101], 00:24:40.984 | 70.00th=[ 113], 80.00th=[ 124], 90.00th=[ 148], 95.00th=[ 167], 00:24:40.984 | 99.00th=[ 220], 99.50th=[ 232], 99.90th=[ 245], 99.95th=[ 255], 00:24:40.984 | 99.99th=[ 264] 00:24:40.984 bw ( KiB/s): min=101376, max=314880, per=8.71%, avg=189004.80, stdev=60379.74, samples=20 00:24:40.984 iops : min= 396, max= 1230, avg=738.30, stdev=235.86, samples=20 00:24:40.984 lat (usec) : 1000=0.03% 00:24:40.984 lat (msec) : 2=0.03%, 4=0.56%, 10=5.48%, 20=5.36%, 50=14.50% 00:24:40.984 lat (msec) : 100=34.46%, 250=39.50%, 500=0.08% 00:24:40.984 cpu : usr=0.33%, sys=2.74%, ctx=1741, majf=0, minf=3347 00:24:40.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:40.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.984 issued rwts: total=7446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.984 job3: (groupid=0, jobs=1): err= 0: pid=1699381: Mon Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=876, BW=219MiB/s (230MB/s)(2212MiB/10092msec) 00:24:40.984 slat (usec): min=8, max=104778, avg=793.38, stdev=3805.23 00:24:40.984 clat (usec): min=1611, max=251599, avg=72143.54, stdev=49281.48 00:24:40.984 lat (usec): min=1642, max=317799, avg=72936.92, stdev=49954.74 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 27], 00:24:40.984 | 30.00th=[ 33], 40.00th=[ 46], 50.00th=[ 61], 60.00th=[ 77], 00:24:40.984 | 70.00th=[ 101], 80.00th=[ 123], 90.00th=[ 144], 95.00th=[ 159], 00:24:40.984 | 99.00th=[ 194], 99.50th=[ 205], 99.90th=[ 239], 99.95th=[ 241], 00:24:40.984 | 99.99th=[ 251] 00:24:40.984 bw ( KiB/s): min=94208, max=545792, per=10.37%, avg=224872.90, stdev=113140.27, samples=20 00:24:40.984 iops : min= 368, max= 2132, avg=878.40, stdev=441.95, samples=20 00:24:40.984 lat (msec) : 2=0.14%, 4=0.29%, 10=2.35%, 20=7.61%, 50=33.33% 00:24:40.984 lat (msec) : 100=26.26%, 250=30.01%, 500=0.01% 00:24:40.984 cpu : usr=0.26%, sys=3.00%, ctx=2040, majf=0, minf=4097 00:24:40.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:40.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.984 issued rwts: total=8846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.984 job4: (groupid=0, jobs=1): err= 0: pid=1699382: Mon Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=832, BW=208MiB/s (218MB/s)(2088MiB/10026msec) 00:24:40.984 slat (usec): min=10, max=212559, avg=931.49, stdev=4024.52 00:24:40.984 clat (usec): min=1600, max=291413, avg=75843.63, stdev=42828.33 00:24:40.984 lat (usec): min=1640, max=351997, avg=76775.12, stdev=43363.39 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 27], 20.00th=[ 39], 00:24:40.984 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 68], 60.00th=[ 83], 00:24:40.984 | 70.00th=[ 97], 80.00th=[ 114], 90.00th=[ 132], 95.00th=[ 146], 00:24:40.984 | 
99.00th=[ 197], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 284], 00:24:40.984 | 99.99th=[ 292] 00:24:40.984 bw ( KiB/s): min=125440, max=393728, per=9.78%, avg=212147.20, stdev=81496.55, samples=20 00:24:40.984 iops : min= 490, max= 1538, avg=828.70, stdev=318.35, samples=20 00:24:40.984 lat (msec) : 2=0.16%, 4=0.83%, 10=1.63%, 20=4.13%, 50=22.22% 00:24:40.984 lat (msec) : 100=43.25%, 250=27.34%, 500=0.46% 00:24:40.984 cpu : usr=0.36%, sys=3.03%, ctx=1846, majf=0, minf=4097 00:24:40.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:40.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.984 issued rwts: total=8350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.984 job5: (groupid=0, jobs=1): err= 0: pid=1699383: Mon Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=697, BW=174MiB/s (183MB/s)(1760MiB/10099msec) 00:24:40.984 slat (usec): min=11, max=57820, avg=1199.09, stdev=3654.21 00:24:40.984 clat (msec): min=3, max=262, avg=90.51, stdev=36.18 00:24:40.984 lat (msec): min=3, max=262, avg=91.71, stdev=36.78 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 11], 5.00th=[ 31], 10.00th=[ 45], 20.00th=[ 60], 00:24:40.984 | 30.00th=[ 71], 40.00th=[ 82], 50.00th=[ 92], 60.00th=[ 102], 00:24:40.984 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 150], 00:24:40.984 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 222], 99.95th=[ 257], 00:24:40.984 | 99.99th=[ 262] 00:24:40.984 bw ( KiB/s): min=112640, max=282624, per=8.24%, avg=178636.80, stdev=51975.31, samples=20 00:24:40.984 iops : min= 440, max= 1104, avg=697.80, stdev=203.03, samples=20 00:24:40.984 lat (msec) : 4=0.04%, 10=0.77%, 20=2.05%, 50=9.98%, 100=45.72% 00:24:40.984 lat (msec) : 250=41.39%, 500=0.06% 00:24:40.984 cpu : usr=0.32%, sys=2.70%, ctx=1602, majf=0, minf=4097 00:24:40.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:40.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.984 issued rwts: total=7041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.984 job6: (groupid=0, jobs=1): err= 0: pid=1699384: Mon Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=827, BW=207MiB/s (217MB/s)(2074MiB/10027msec) 00:24:40.984 slat (usec): min=10, max=87119, avg=829.18, stdev=3699.29 00:24:40.984 clat (usec): min=1291, max=270360, avg=76438.97, stdev=46544.36 00:24:40.984 lat (usec): min=1322, max=291889, avg=77268.15, stdev=47075.10 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 25], 20.00th=[ 31], 00:24:40.984 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 78], 00:24:40.984 | 70.00th=[ 90], 80.00th=[ 120], 90.00th=[ 150], 95.00th=[ 163], 00:24:40.984 | 99.00th=[ 199], 99.50th=[ 224], 99.90th=[ 236], 99.95th=[ 253], 00:24:40.984 | 99.99th=[ 271] 00:24:40.984 bw ( KiB/s): min=73728, max=387584, per=9.72%, avg=210807.25, stdev=81076.22, samples=20 00:24:40.984 iops : min= 288, max= 1514, avg=823.45, stdev=316.71, samples=20 00:24:40.984 lat (msec) : 2=0.08%, 4=0.60%, 10=2.80%, 20=3.16%, 50=23.70% 00:24:40.984 lat (msec) : 100=44.22%, 250=25.38%, 500=0.06% 00:24:40.984 cpu : usr=0.32%, sys=3.01%, ctx=2003, majf=0, minf=4097 00:24:40.984 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:40.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.984 issued rwts: total=8297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.984 job7: (groupid=0, jobs=1): err= 0: pid=1699386: Mon Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=884, BW=221MiB/s (232MB/s)(2232MiB/10093msec) 00:24:40.984 slat (usec): min=7, max=48634, avg=655.68, stdev=2717.63 00:24:40.984 clat (usec): min=686, max=213330, avg=71640.38, stdev=43739.27 00:24:40.984 lat (usec): min=712, max=213369, avg=72296.06, stdev=44156.68 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 33], 00:24:40.984 | 30.00th=[ 46], 40.00th=[ 53], 50.00th=[ 63], 60.00th=[ 80], 00:24:40.984 | 70.00th=[ 96], 80.00th=[ 114], 90.00th=[ 133], 95.00th=[ 150], 00:24:40.984 | 99.00th=[ 178], 99.50th=[ 194], 99.90th=[ 203], 99.95th=[ 207], 00:24:40.984 | 99.99th=[ 213] 00:24:40.984 bw ( KiB/s): min=120832, max=384512, per=10.46%, avg=226892.80, stdev=81887.87, samples=20 00:24:40.984 iops : min= 472, max= 1502, avg=886.30, stdev=319.87, samples=20 00:24:40.984 lat (usec) : 750=0.01%, 1000=0.27% 00:24:40.984 lat (msec) : 2=0.26%, 4=0.96%, 10=5.38%, 20=5.96%, 50=23.34% 00:24:40.984 lat (msec) : 100=36.37%, 250=27.46% 00:24:40.984 cpu : usr=0.30%, sys=3.20%, ctx=2199, majf=0, minf=4097 00:24:40.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:40.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.984 issued rwts: total=8926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.984 job8: (groupid=0, jobs=1): err= 0: pid=1699396: Mon Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=718, BW=180MiB/s (188MB/s)(1812MiB/10093msec) 00:24:40.984 slat (usec): min=7, max=159572, avg=839.09, stdev=4664.92 00:24:40.984 clat (usec): min=1215, max=254698, avg=88173.48, stdev=54550.27 00:24:40.984 lat (usec): min=1243, max=386429, avg=89012.57, stdev=55262.71 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 28], 00:24:40.984 | 30.00th=[ 50], 40.00th=[ 73], 50.00th=[ 91], 60.00th=[ 109], 00:24:40.984 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 155], 95.00th=[ 178], 00:24:40.984 | 99.00th=[ 224], 99.50th=[ 232], 99.90th=[ 249], 99.95th=[ 251], 00:24:40.984 | 99.99th=[ 255] 00:24:40.984 bw ( KiB/s): min=96256, max=339456, per=8.48%, avg=183961.60, stdev=79274.70, samples=20 00:24:40.984 iops : min= 376, max= 1326, avg=718.60, stdev=309.67, samples=20 00:24:40.984 lat (msec) : 2=0.23%, 4=1.39%, 10=4.21%, 20=7.67%, 50=16.58% 00:24:40.984 lat (msec) : 100=25.88%, 250=44.02%, 500=0.01% 00:24:40.984 cpu : usr=0.23%, sys=2.46%, ctx=1889, majf=0, minf=4097 00:24:40.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:40.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.984 issued rwts: total=7249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.984 job9: (groupid=0, jobs=1): err= 0: pid=1699404: Mon 
Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=723, BW=181MiB/s (190MB/s)(1817MiB/10048msec) 00:24:40.984 slat (usec): min=10, max=116899, avg=894.97, stdev=4439.02 00:24:40.984 clat (usec): min=1815, max=246793, avg=87497.31, stdev=53018.43 00:24:40.984 lat (usec): min=1872, max=260798, avg=88392.28, stdev=53610.02 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 20], 20.00th=[ 30], 00:24:40.984 | 30.00th=[ 51], 40.00th=[ 68], 50.00th=[ 88], 60.00th=[ 105], 00:24:40.984 | 70.00th=[ 118], 80.00th=[ 133], 90.00th=[ 161], 95.00th=[ 180], 00:24:40.984 | 99.00th=[ 213], 99.50th=[ 224], 99.90th=[ 247], 99.95th=[ 247], 00:24:40.984 | 99.99th=[ 247] 00:24:40.984 bw ( KiB/s): min=95232, max=340480, per=8.50%, avg=184473.60, stdev=67150.25, samples=20 00:24:40.984 iops : min= 372, max= 1330, avg=720.60, stdev=262.31, samples=20 00:24:40.984 lat (msec) : 2=0.01%, 4=0.36%, 10=2.93%, 20=6.99%, 50=19.09% 00:24:40.984 lat (msec) : 100=28.34%, 250=42.28% 00:24:40.984 cpu : usr=0.31%, sys=2.60%, ctx=1795, majf=0, minf=4097 00:24:40.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:40.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.984 issued rwts: total=7269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.984 job10: (groupid=0, jobs=1): err= 0: pid=1699409: Mon Jul 15 19:30:49 2024 00:24:40.984 read: IOPS=726, BW=182MiB/s (190MB/s)(1825MiB/10050msec) 00:24:40.984 slat (usec): min=8, max=92921, avg=1041.44, stdev=4267.37 00:24:40.984 clat (usec): min=1579, max=297458, avg=86992.47, stdev=49078.99 00:24:40.984 lat (usec): min=1622, max=297504, avg=88033.91, stdev=49809.41 00:24:40.984 clat percentiles (msec): 00:24:40.984 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 19], 20.00th=[ 41], 00:24:40.984 | 30.00th=[ 59], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 99], 00:24:40.984 | 70.00th=[ 116], 80.00th=[ 131], 90.00th=[ 148], 95.00th=[ 167], 00:24:40.984 | 99.00th=[ 205], 99.50th=[ 236], 99.90th=[ 249], 99.95th=[ 255], 00:24:40.984 | 99.99th=[ 296] 00:24:40.984 bw ( KiB/s): min=87040, max=357376, per=8.54%, avg=185222.00, stdev=78164.64, samples=20 00:24:40.984 iops : min= 340, max= 1396, avg=723.50, stdev=305.33, samples=20 00:24:40.984 lat (msec) : 2=0.07%, 4=0.59%, 10=3.27%, 20=7.54%, 50=12.00% 00:24:40.984 lat (msec) : 100=37.54%, 250=38.92%, 500=0.07% 00:24:40.985 cpu : usr=0.17%, sys=2.55%, ctx=1712, majf=0, minf=4097 00:24:40.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:40.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.985 issued rwts: total=7299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.985 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.985 00:24:40.985 Run status group 0 (all jobs): 00:24:40.985 READ: bw=2118MiB/s (2221MB/s), 170MiB/s-221MiB/s (178MB/s-232MB/s), io=20.9GiB (22.4GB), run=10026-10101msec 00:24:40.985 00:24:40.985 Disk stats (read/write): 00:24:40.985 nvme0n1: ios=15785/0, merge=0/0, ticks=1228889/0, in_queue=1228889, util=97.35% 00:24:40.985 nvme10n1: ios=13555/0, merge=0/0, ticks=1234796/0, in_queue=1234796, util=97.55% 00:24:40.985 nvme1n1: ios=14756/0, merge=0/0, ticks=1234601/0, in_queue=1234601, util=97.82% 00:24:40.985 nvme2n1: ios=17469/0, 
merge=0/0, ticks=1235722/0, in_queue=1235722, util=97.94% 00:24:40.985 nvme3n1: ios=16476/0, merge=0/0, ticks=1239145/0, in_queue=1239145, util=98.02% 00:24:40.985 nvme4n1: ios=13920/0, merge=0/0, ticks=1233173/0, in_queue=1233173, util=98.32% 00:24:40.985 nvme5n1: ios=16382/0, merge=0/0, ticks=1241884/0, in_queue=1241884, util=98.46% 00:24:40.985 nvme6n1: ios=17688/0, merge=0/0, ticks=1244996/0, in_queue=1244996, util=98.57% 00:24:40.985 nvme7n1: ios=14351/0, merge=0/0, ticks=1236055/0, in_queue=1236055, util=98.96% 00:24:40.985 nvme8n1: ios=14337/0, merge=0/0, ticks=1244914/0, in_queue=1244914, util=99.11% 00:24:40.985 nvme9n1: ios=14383/0, merge=0/0, ticks=1236919/0, in_queue=1236919, util=99.21% 00:24:40.985 19:30:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:40.985 [global] 00:24:40.985 thread=1 00:24:40.985 invalidate=1 00:24:40.985 rw=randwrite 00:24:40.985 time_based=1 00:24:40.985 runtime=10 00:24:40.985 ioengine=libaio 00:24:40.985 direct=1 00:24:40.985 bs=262144 00:24:40.985 iodepth=64 00:24:40.985 norandommap=1 00:24:40.985 numjobs=1 00:24:40.985 00:24:40.985 [job0] 00:24:40.985 filename=/dev/nvme0n1 00:24:40.985 [job1] 00:24:40.985 filename=/dev/nvme10n1 00:24:40.985 [job2] 00:24:40.985 filename=/dev/nvme1n1 00:24:40.985 [job3] 00:24:40.985 filename=/dev/nvme2n1 00:24:40.985 [job4] 00:24:40.985 filename=/dev/nvme3n1 00:24:40.985 [job5] 00:24:40.985 filename=/dev/nvme4n1 00:24:40.985 [job6] 00:24:40.985 filename=/dev/nvme5n1 00:24:40.985 [job7] 00:24:40.985 filename=/dev/nvme6n1 00:24:40.985 [job8] 00:24:40.985 filename=/dev/nvme7n1 00:24:40.985 [job9] 00:24:40.985 filename=/dev/nvme8n1 00:24:40.985 [job10] 00:24:40.985 filename=/dev/nvme9n1 00:24:40.985 Could not set queue depth (nvme0n1) 00:24:40.985 Could not set queue depth (nvme10n1) 00:24:40.985 Could not set queue depth (nvme1n1) 00:24:40.985 Could not set queue depth (nvme2n1) 00:24:40.985 Could not set queue depth (nvme3n1) 00:24:40.985 Could not set queue depth (nvme4n1) 00:24:40.985 Could not set queue depth (nvme5n1) 00:24:40.985 Could not set queue depth (nvme6n1) 00:24:40.985 Could not set queue depth (nvme7n1) 00:24:40.985 Could not set queue depth (nvme8n1) 00:24:40.985 Could not set queue depth (nvme9n1) 00:24:40.985 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 job9: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.985 fio-3.35 00:24:40.985 Starting 11 threads 00:24:50.953 00:24:50.953 job0: (groupid=0, jobs=1): err= 0: pid=1700932: Mon Jul 15 19:31:00 2024 00:24:50.953 write: IOPS=513, BW=128MiB/s (135MB/s)(1292MiB/10070msec); 0 zone resets 00:24:50.953 slat (usec): min=20, max=77803, avg=1635.96, stdev=4030.09 00:24:50.953 clat (usec): min=1373, max=207830, avg=123029.80, stdev=51879.99 00:24:50.953 lat (usec): min=1419, max=216088, avg=124665.76, stdev=52665.39 00:24:50.953 clat percentiles (msec): 00:24:50.953 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 27], 20.00th=[ 73], 00:24:50.953 | 30.00th=[ 122], 40.00th=[ 131], 50.00th=[ 140], 60.00th=[ 150], 00:24:50.953 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 00:24:50.953 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 199], 99.95th=[ 205], 00:24:50.953 | 99.99th=[ 209] 00:24:50.953 bw ( KiB/s): min=95744, max=352768, per=8.05%, avg=130698.50, stdev=59326.77, samples=20 00:24:50.953 iops : min= 374, max= 1378, avg=510.50, stdev=231.76, samples=20 00:24:50.953 lat (msec) : 2=0.10%, 4=0.50%, 10=1.99%, 20=4.26%, 50=9.81% 00:24:50.953 lat (msec) : 100=8.69%, 250=74.65% 00:24:50.953 cpu : usr=1.15%, sys=1.63%, ctx=2481, majf=0, minf=1 00:24:50.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:50.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.953 issued rwts: total=0,5168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.953 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.953 job1: (groupid=0, jobs=1): err= 0: pid=1700944: Mon Jul 15 19:31:00 2024 00:24:50.953 write: IOPS=489, BW=122MiB/s (128MB/s)(1245MiB/10174msec); 0 zone resets 00:24:50.953 slat (usec): min=28, max=53154, avg=1789.11, stdev=3712.87 00:24:50.953 clat (msec): min=3, max=423, avg=128.95, stdev=41.81 00:24:50.953 lat (msec): min=4, max=423, avg=130.74, stdev=42.38 00:24:50.953 clat percentiles (msec): 00:24:50.953 | 1.00th=[ 16], 5.00th=[ 51], 10.00th=[ 74], 20.00th=[ 105], 00:24:50.953 | 30.00th=[ 117], 40.00th=[ 129], 50.00th=[ 136], 60.00th=[ 144], 00:24:50.953 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 169], 00:24:50.953 | 99.00th=[ 253], 99.50th=[ 338], 99.90th=[ 414], 99.95th=[ 414], 00:24:50.953 | 99.99th=[ 426] 00:24:50.953 bw ( KiB/s): min=90624, max=184832, per=7.75%, avg=125824.00, stdev=26107.97, samples=20 00:24:50.953 iops : min= 354, max= 722, avg=491.50, stdev=101.98, samples=20 00:24:50.953 lat (msec) : 4=0.02%, 10=0.32%, 20=1.37%, 50=3.23%, 100=12.05% 00:24:50.953 lat (msec) : 250=81.86%, 500=1.15% 00:24:50.953 cpu : usr=1.45%, sys=1.55%, ctx=1929, majf=0, minf=1 00:24:50.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:50.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.953 issued rwts: total=0,4978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.953 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.953 job2: (groupid=0, jobs=1): err= 0: pid=1700945: Mon Jul 15 19:31:00 2024 00:24:50.954 write: IOPS=942, BW=236MiB/s (247MB/s)(2399MiB/10185msec); 0 zone resets 
00:24:50.954 slat (usec): min=23, max=58344, avg=904.86, stdev=1960.65 00:24:50.954 clat (usec): min=1828, max=365065, avg=66976.33, stdev=31924.91 00:24:50.954 lat (msec): min=2, max=365, avg=67.88, stdev=32.19 00:24:50.954 clat percentiles (msec): 00:24:50.954 | 1.00th=[ 10], 5.00th=[ 29], 10.00th=[ 40], 20.00th=[ 43], 00:24:50.954 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 71], 60.00th=[ 77], 00:24:50.954 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 100], 95.00th=[ 113], 00:24:50.954 | 99.00th=[ 180], 99.50th=[ 222], 99.90th=[ 338], 99.95th=[ 347], 00:24:50.954 | 99.99th=[ 368] 00:24:50.954 bw ( KiB/s): min=122880, max=373248, per=15.03%, avg=244019.20, stdev=68320.71, samples=20 00:24:50.954 iops : min= 480, max= 1458, avg=953.20, stdev=266.88, samples=20 00:24:50.954 lat (msec) : 2=0.02%, 4=0.09%, 10=0.96%, 20=2.17%, 50=37.69% 00:24:50.954 lat (msec) : 100=49.42%, 250=9.25%, 500=0.40% 00:24:50.954 cpu : usr=2.15%, sys=2.45%, ctx=3400, majf=0, minf=1 00:24:50.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.954 issued rwts: total=0,9596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.954 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.954 job3: (groupid=0, jobs=1): err= 0: pid=1700947: Mon Jul 15 19:31:00 2024 00:24:50.954 write: IOPS=714, BW=179MiB/s (187MB/s)(1817MiB/10173msec); 0 zone resets 00:24:50.954 slat (usec): min=16, max=60515, avg=1084.07, stdev=2656.47 00:24:50.954 clat (usec): min=1246, max=321280, avg=88449.28, stdev=47651.12 00:24:50.954 lat (usec): min=1296, max=321395, avg=89533.35, stdev=48196.34 00:24:50.954 clat percentiles (msec): 00:24:50.954 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 38], 20.00th=[ 42], 00:24:50.954 | 30.00th=[ 56], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 84], 00:24:50.954 | 70.00th=[ 116], 80.00th=[ 140], 90.00th=[ 157], 95.00th=[ 163], 00:24:50.954 | 99.00th=[ 190], 99.50th=[ 222], 99.90th=[ 288], 99.95th=[ 321], 00:24:50.954 | 99.99th=[ 321] 00:24:50.954 bw ( KiB/s): min=102400, max=377856, per=11.36%, avg=184448.00, stdev=77843.65, samples=20 00:24:50.954 iops : min= 400, max= 1476, avg=720.50, stdev=304.08, samples=20 00:24:50.954 lat (msec) : 2=0.15%, 4=0.62%, 10=2.11%, 20=1.62%, 50=23.76% 00:24:50.954 lat (msec) : 100=35.39%, 250=36.03%, 500=0.32% 00:24:50.954 cpu : usr=1.68%, sys=1.90%, ctx=3415, majf=0, minf=1 00:24:50.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.954 issued rwts: total=0,7268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.954 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.954 job4: (groupid=0, jobs=1): err= 0: pid=1700950: Mon Jul 15 19:31:00 2024 00:24:50.954 write: IOPS=517, BW=129MiB/s (136MB/s)(1318MiB/10178msec); 0 zone resets 00:24:50.954 slat (usec): min=25, max=57861, avg=1669.66, stdev=3733.16 00:24:50.954 clat (usec): min=1768, max=366377, avg=121831.47, stdev=48931.34 00:24:50.954 lat (msec): min=2, max=366, avg=123.50, stdev=49.64 00:24:50.954 clat percentiles (msec): 00:24:50.954 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 50], 20.00th=[ 75], 00:24:50.954 | 30.00th=[ 102], 40.00th=[ 126], 50.00th=[ 138], 60.00th=[ 146], 00:24:50.954 | 70.00th=[ 150], 80.00th=[ 157], 
90.00th=[ 165], 95.00th=[ 176], 00:24:50.954 | 99.00th=[ 239], 99.50th=[ 279], 99.90th=[ 355], 99.95th=[ 359], 00:24:50.954 | 99.99th=[ 368] 00:24:50.954 bw ( KiB/s): min=102912, max=259072, per=8.21%, avg=133340.85, stdev=43604.58, samples=20 00:24:50.954 iops : min= 402, max= 1012, avg=520.85, stdev=170.32, samples=20 00:24:50.954 lat (msec) : 2=0.04%, 4=0.23%, 10=1.35%, 20=2.58%, 50=5.82% 00:24:50.954 lat (msec) : 100=18.93%, 250=70.33%, 500=0.72% 00:24:50.954 cpu : usr=1.20%, sys=1.71%, ctx=2266, majf=0, minf=1 00:24:50.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.954 issued rwts: total=0,5271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.954 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.954 job5: (groupid=0, jobs=1): err= 0: pid=1700964: Mon Jul 15 19:31:00 2024 00:24:50.954 write: IOPS=508, BW=127MiB/s (133MB/s)(1294MiB/10177msec); 0 zone resets 00:24:50.954 slat (usec): min=21, max=116936, avg=1820.01, stdev=4723.37 00:24:50.954 clat (msec): min=12, max=427, avg=123.90, stdev=45.42 00:24:50.954 lat (msec): min=12, max=427, avg=125.72, stdev=45.95 00:24:50.954 clat percentiles (msec): 00:24:50.954 | 1.00th=[ 32], 5.00th=[ 55], 10.00th=[ 67], 20.00th=[ 82], 00:24:50.954 | 30.00th=[ 107], 40.00th=[ 122], 50.00th=[ 130], 60.00th=[ 133], 00:24:50.954 | 70.00th=[ 138], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 186], 00:24:50.954 | 99.00th=[ 259], 99.50th=[ 359], 99.90th=[ 414], 99.95th=[ 418], 00:24:50.954 | 99.99th=[ 426] 00:24:50.954 bw ( KiB/s): min=76800, max=238592, per=8.06%, avg=130841.60, stdev=39286.92, samples=20 00:24:50.954 iops : min= 300, max= 932, avg=511.10, stdev=153.46, samples=20 00:24:50.954 lat (msec) : 20=0.17%, 50=3.34%, 100=21.03%, 250=74.39%, 500=1.06% 00:24:50.954 cpu : usr=1.29%, sys=1.19%, ctx=1713, majf=0, minf=1 00:24:50.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.954 issued rwts: total=0,5174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.954 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.954 job6: (groupid=0, jobs=1): err= 0: pid=1700971: Mon Jul 15 19:31:00 2024 00:24:50.954 write: IOPS=469, BW=117MiB/s (123MB/s)(1185MiB/10104msec); 0 zone resets 00:24:50.954 slat (usec): min=25, max=47607, avg=1894.88, stdev=3982.95 00:24:50.954 clat (msec): min=3, max=236, avg=134.47, stdev=41.22 00:24:50.954 lat (msec): min=3, max=236, avg=136.37, stdev=41.80 00:24:50.954 clat percentiles (msec): 00:24:50.954 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 78], 20.00th=[ 115], 00:24:50.954 | 30.00th=[ 129], 40.00th=[ 138], 50.00th=[ 146], 60.00th=[ 153], 00:24:50.954 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 171], 95.00th=[ 178], 00:24:50.954 | 99.00th=[ 192], 99.50th=[ 197], 99.90th=[ 226], 99.95th=[ 236], 00:24:50.954 | 99.99th=[ 236] 00:24:50.954 bw ( KiB/s): min=96256, max=158208, per=7.37%, avg=119705.60, stdev=19256.85, samples=20 00:24:50.954 iops : min= 376, max= 618, avg=467.60, stdev=75.22, samples=20 00:24:50.954 lat (msec) : 4=0.02%, 10=0.87%, 20=3.44%, 50=3.23%, 100=5.51% 00:24:50.954 lat (msec) : 250=86.94% 00:24:50.954 cpu : usr=1.43%, sys=1.33%, ctx=1855, majf=0, minf=1 00:24:50.954 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.954 issued rwts: total=0,4739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.954 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.954 job7: (groupid=0, jobs=1): err= 0: pid=1700976: Mon Jul 15 19:31:00 2024 00:24:50.954 write: IOPS=565, BW=141MiB/s (148MB/s)(1424MiB/10072msec); 0 zone resets 00:24:50.954 slat (usec): min=25, max=39992, avg=1669.71, stdev=3414.38 00:24:50.954 clat (msec): min=2, max=208, avg=111.42, stdev=41.72 00:24:50.954 lat (msec): min=4, max=208, avg=113.09, stdev=42.28 00:24:50.954 clat percentiles (msec): 00:24:50.954 | 1.00th=[ 21], 5.00th=[ 43], 10.00th=[ 68], 20.00th=[ 78], 00:24:50.954 | 30.00th=[ 81], 40.00th=[ 96], 50.00th=[ 105], 60.00th=[ 122], 00:24:50.954 | 70.00th=[ 140], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 176], 00:24:50.954 | 99.00th=[ 188], 99.50th=[ 203], 99.90th=[ 209], 99.95th=[ 209], 00:24:50.954 | 99.99th=[ 209] 00:24:50.954 bw ( KiB/s): min=92160, max=230912, per=8.88%, avg=144230.40, stdev=45090.31, samples=20 00:24:50.954 iops : min= 360, max= 902, avg=563.40, stdev=176.13, samples=20 00:24:50.954 lat (msec) : 4=0.02%, 10=0.18%, 20=0.74%, 50=6.23%, 100=36.97% 00:24:50.954 lat (msec) : 250=55.87% 00:24:50.954 cpu : usr=1.65%, sys=1.66%, ctx=1772, majf=0, minf=1 00:24:50.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.954 issued rwts: total=0,5697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.954 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.954 job8: (groupid=0, jobs=1): err= 0: pid=1700989: Mon Jul 15 19:31:00 2024 00:24:50.954 write: IOPS=523, BW=131MiB/s (137MB/s)(1323MiB/10100msec); 0 zone resets 00:24:50.954 slat (usec): min=22, max=129938, avg=1709.18, stdev=3883.16 00:24:50.954 clat (msec): min=2, max=232, avg=120.42, stdev=40.44 00:24:50.954 lat (msec): min=2, max=232, avg=122.13, stdev=40.97 00:24:50.954 clat percentiles (msec): 00:24:50.954 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 55], 20.00th=[ 103], 00:24:50.954 | 30.00th=[ 115], 40.00th=[ 125], 50.00th=[ 132], 60.00th=[ 133], 00:24:50.954 | 70.00th=[ 140], 80.00th=[ 150], 90.00th=[ 161], 95.00th=[ 165], 00:24:50.954 | 99.00th=[ 197], 99.50th=[ 207], 99.90th=[ 222], 99.95th=[ 222], 00:24:50.954 | 99.99th=[ 232] 00:24:50.954 bw ( KiB/s): min=102400, max=220088, per=8.24%, avg=133833.20, stdev=36492.24, samples=20 00:24:50.954 iops : min= 400, max= 859, avg=522.75, stdev=142.46, samples=20 00:24:50.954 lat (msec) : 4=0.30%, 10=2.17%, 20=2.16%, 50=4.91%, 100=8.51% 00:24:50.954 lat (msec) : 250=81.95% 00:24:50.954 cpu : usr=1.03%, sys=1.46%, ctx=2006, majf=0, minf=1 00:24:50.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.954 issued rwts: total=0,5290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.954 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.954 job9: (groupid=0, jobs=1): err= 0: pid=1700996: Mon Jul 15 19:31:00 2024 00:24:50.954 write: IOPS=524, BW=131MiB/s (138MB/s)(1335MiB/10172msec); 
0 zone resets 00:24:50.954 slat (usec): min=11, max=32693, avg=1452.98, stdev=3331.14 00:24:50.954 clat (usec): min=1194, max=338218, avg=120422.59, stdev=47698.55 00:24:50.954 lat (usec): min=1206, max=338275, avg=121875.57, stdev=48265.03 00:24:50.954 clat percentiles (msec): 00:24:50.954 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 31], 20.00th=[ 96], 00:24:50.954 | 30.00th=[ 120], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 136], 00:24:50.954 | 70.00th=[ 148], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 171], 00:24:50.954 | 99.00th=[ 197], 99.50th=[ 218], 99.90th=[ 296], 99.95th=[ 334], 00:24:50.954 | 99.99th=[ 338] 00:24:50.954 bw ( KiB/s): min=100352, max=209408, per=8.31%, avg=135040.00, stdev=29340.24, samples=20 00:24:50.954 iops : min= 392, max= 818, avg=527.50, stdev=114.61, samples=20 00:24:50.954 lat (msec) : 2=0.17%, 4=1.31%, 10=2.53%, 20=3.33%, 50=5.64% 00:24:50.954 lat (msec) : 100=8.45%, 250=78.34%, 500=0.22% 00:24:50.955 cpu : usr=1.14%, sys=1.62%, ctx=2669, majf=0, minf=1 00:24:50.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.955 issued rwts: total=0,5338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.955 job10: (groupid=0, jobs=1): err= 0: pid=1700999: Mon Jul 15 19:31:00 2024 00:24:50.955 write: IOPS=599, BW=150MiB/s (157MB/s)(1524MiB/10167msec); 0 zone resets 00:24:50.955 slat (usec): min=21, max=16816, avg=1260.99, stdev=2922.74 00:24:50.955 clat (msec): min=3, max=341, avg=105.43, stdev=47.75 00:24:50.955 lat (msec): min=3, max=342, avg=106.70, stdev=48.32 00:24:50.955 clat percentiles (msec): 00:24:50.955 | 1.00th=[ 11], 5.00th=[ 23], 10.00th=[ 39], 20.00th=[ 52], 00:24:50.955 | 30.00th=[ 78], 40.00th=[ 104], 50.00th=[ 123], 60.00th=[ 131], 00:24:50.955 | 70.00th=[ 133], 80.00th=[ 144], 90.00th=[ 159], 95.00th=[ 163], 00:24:50.955 | 99.00th=[ 197], 99.50th=[ 253], 99.90th=[ 330], 99.95th=[ 334], 00:24:50.955 | 99.99th=[ 342] 00:24:50.955 bw ( KiB/s): min=101888, max=268288, per=9.51%, avg=154432.60, stdev=52952.52, samples=20 00:24:50.955 iops : min= 398, max= 1048, avg=603.25, stdev=206.85, samples=20 00:24:50.955 lat (msec) : 4=0.02%, 10=0.97%, 20=3.35%, 50=15.32%, 100=18.31% 00:24:50.955 lat (msec) : 250=61.51%, 500=0.53% 00:24:50.955 cpu : usr=1.32%, sys=1.84%, ctx=3004, majf=0, minf=1 00:24:50.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.955 issued rwts: total=0,6095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.955 00:24:50.955 Run status group 0 (all jobs): 00:24:50.955 WRITE: bw=1586MiB/s (1663MB/s), 117MiB/s-236MiB/s (123MB/s-247MB/s), io=15.8GiB (16.9GB), run=10070-10185msec 00:24:50.955 00:24:50.955 Disk stats (read/write): 00:24:50.955 nvme0n1: ios=49/9853, merge=0/0, ticks=568/1213651, in_queue=1214219, util=97.96% 00:24:50.955 nvme10n1: ios=34/9935, merge=0/0, ticks=38/1235103, in_queue=1235141, util=97.43% 00:24:50.955 nvme1n1: ios=43/19161, merge=0/0, ticks=1320/1233484, in_queue=1234804, util=99.94% 00:24:50.955 nvme2n1: ios=0/14518, merge=0/0, ticks=0/1240465, in_queue=1240465, util=97.67% 00:24:50.955 
nvme3n1: ios=46/10524, merge=0/0, ticks=817/1236512, in_queue=1237329, util=99.94% 00:24:50.955 nvme4n1: ios=46/10332, merge=0/0, ticks=4271/1211807, in_queue=1216078, util=99.93% 00:24:50.955 nvme5n1: ios=48/9265, merge=0/0, ticks=780/1206275, in_queue=1207055, util=99.95% 00:24:50.955 nvme6n1: ios=45/11044, merge=0/0, ticks=1194/1210337, in_queue=1211531, util=99.95% 00:24:50.955 nvme7n1: ios=45/10364, merge=0/0, ticks=1270/1195081, in_queue=1196351, util=99.92% 00:24:50.955 nvme8n1: ios=47/10665, merge=0/0, ticks=292/1243478, in_queue=1243770, util=99.95% 00:24:50.955 nvme9n1: ios=25/12183, merge=0/0, ticks=544/1243532, in_queue=1244076, util=100.00% 00:24:50.955 19:31:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:50.955 19:31:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:50.955 19:31:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.955 19:31:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:50.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:50.955 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.955 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:51.214 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.214 19:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:51.472 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.472 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:51.731 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.731 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:51.990 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.990 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:52.248 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # local i=0 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.248 19:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:52.506 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:52.506 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk 
-l -o NAME,SERIAL 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.506 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:52.763 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:52.763 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:52.764 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.764 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.764 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.764 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.764 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:52.764 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:52.764 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:52.764 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # 
set +x 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:53.021 rmmod nvme_tcp 00:24:53.021 rmmod nvme_fabrics 00:24:53.021 rmmod nvme_keyring 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1692538 ']' 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1692538 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1692538 ']' 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1692538 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1692538 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1692538' 00:24:53.021 killing process with pid 1692538 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1692538 00:24:53.021 19:31:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1692538 00:24:53.587 19:31:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:53.587 19:31:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:53.587 19:31:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:53.587 19:31:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:53.587 19:31:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:53.587 19:31:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.587 19:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.587 19:31:04 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.486 19:31:06 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:55.486 00:24:55.486 real 1m9.732s 00:24:55.486 user 4m8.993s 00:24:55.486 sys 0m23.242s 00:24:55.486 19:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:55.486 19:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.486 ************************************ 00:24:55.486 END TEST nvmf_multiconnection 00:24:55.486 ************************************ 00:24:55.486 19:31:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:55.486 19:31:06 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:55.486 19:31:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:55.486 19:31:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.486 19:31:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:55.486 ************************************ 00:24:55.486 START TEST nvmf_initiator_timeout 00:24:55.486 ************************************ 00:24:55.486 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:55.744 * Looking for test storage... 00:24:55.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.744 19:31:06 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.744 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.745 19:31:06 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:55.745 19:31:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local 
-ga mlx 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:01.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.008 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:01.009 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:01.009 Found net devices under 0000:86:00.0: cvl_0_0 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:01.009 Found net devices under 0000:86:00.1: cvl_0_1 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:01.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:25:01.009 00:25:01.009 --- 10.0.0.2 ping statistics --- 00:25:01.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.009 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:25:01.009 00:25:01.009 --- 10.0.0.1 ping statistics --- 00:25:01.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.009 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1706386 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1706386 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1706386 ']' 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.009 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.009 [2024-07-15 19:31:11.739137] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:25:01.009 [2024-07-15 19:31:11.739180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.009 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.009 [2024-07-15 19:31:11.771953] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
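For reference, the network preparation that nvmftestinit traces above reduces to the following condensed sketch, assembled only from the commands visible in this run's trace; the cvl_0_* interface names, the 10.0.0.x addresses, and listener port 4420 are the values reported by this particular run, not fixed constants.

#!/usr/bin/env bash
# Condensed sketch of the NVMe/TCP test network setup traced above.
# Assumes two ports of one NIC (here cvl_0_0 / cvl_0_1) and root privileges.
set -euo pipefail

TARGET_IF=cvl_0_0          # moved into a namespace and used by nvmf_tgt
INITIATOR_IF=cvl_0_1       # stays in the default namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"            # target port lives in its own namespace

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the listener port and verify reachability both ways.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside the same namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as shown in the trace), which is why the 10.0.0.2 listener created later is reachable from the host only through cvl_0_1.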
00:25:01.009 [2024-07-15 19:31:11.800275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:01.009 [2024-07-15 19:31:11.842040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.009 [2024-07-15 19:31:11.842076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.009 [2024-07-15 19:31:11.842083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.009 [2024-07-15 19:31:11.842089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.009 [2024-07-15 19:31:11.842094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.009 [2024-07-15 19:31:11.842142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.009 [2024-07-15 19:31:11.842273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.009 [2024-07-15 19:31:11.842295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:01.009 [2024-07-15 19:31:11.842297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.267 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.267 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:01.267 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:01.267 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:01.267 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.267 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.267 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:01.267 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:01.267 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.267 19:31:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.267 Malloc0 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.267 Delay0 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.267 [2024-07-15 19:31:12.017195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.267 [2024-07-15 19:31:12.041925] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.267 19:31:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:02.647 19:31:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:02.647 19:31:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:02.647 19:31:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:02.647 19:31:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:02.647 19:31:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:04.553 19:31:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:04.553 19:31:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:04.553 19:31:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:04.553 19:31:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:04.553 19:31:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.553 19:31:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:04.553 19:31:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1707007 00:25:04.553 19:31:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:04.553 19:31:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:04.553 [global] 00:25:04.553 thread=1 00:25:04.553 invalidate=1 00:25:04.553 rw=write 00:25:04.553 time_based=1 00:25:04.553 runtime=60 00:25:04.553 ioengine=libaio 00:25:04.553 direct=1 00:25:04.553 bs=4096 00:25:04.553 iodepth=1 00:25:04.553 norandommap=0 00:25:04.553 numjobs=1 00:25:04.553 00:25:04.553 verify_dump=1 00:25:04.553 verify_backlog=512 00:25:04.553 verify_state_save=0 00:25:04.553 do_verify=1 00:25:04.553 verify=crc32c-intel 00:25:04.553 [job0] 00:25:04.553 filename=/dev/nvme0n1 00:25:04.553 Could not set queue depth (nvme0n1) 00:25:04.809 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:04.809 fio-3.35 00:25:04.809 Starting 1 thread 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.085 true 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.085 true 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.085 true 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:08.085 true 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.085 19:31:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.604 true 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.604 true 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.604 true 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.604 true 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:10.604 19:31:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1707007 00:26:06.794 00:26:06.794 job0: (groupid=0, jobs=1): err= 0: pid=1707220: Mon Jul 15 19:32:15 2024 00:26:06.794 read: IOPS=105, BW=421KiB/s (431kB/s)(24.7MiB/60036msec) 00:26:06.794 slat (usec): min=6, max=11596, avg=12.09, stdev=188.19 00:26:06.794 clat (usec): min=233, max=41579k, avg=9234.89, stdev=523401.92 00:26:06.794 lat (usec): min=241, max=41579k, avg=9246.98, stdev=523401.94 00:26:06.794 clat percentiles (usec): 00:26:06.794 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 297], 00:26:06.794 | 20.00th=[ 310], 30.00th=[ 314], 40.00th=[ 322], 00:26:06.794 | 50.00th=[ 326], 60.00th=[ 330], 70.00th=[ 338], 00:26:06.795 | 80.00th=[ 355], 90.00th=[ 383], 95.00th=[ 40633], 00:26:06.795 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:26:06.795 | 99.95th=[ 43254], 99.99th=[17112761] 00:26:06.795 write: IOPS=110, BW=443KiB/s (454kB/s)(26.0MiB/60036msec); 0 zone resets 00:26:06.795 slat (nsec): min=9427, max=43790, avg=11502.91, stdev=1871.84 00:26:06.795 clat (usec): min=173, max=3808, avg=232.79, stdev=52.86 00:26:06.795 lat (usec): min=183, max=3821, avg=244.30, stdev=52.94 00:26:06.795 clat percentiles (usec): 00:26:06.795 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:26:06.795 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 231], 00:26:06.795 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 318], 00:26:06.795 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 371], 99.95th=[ 388], 00:26:06.795 | 99.99th=[ 3818] 00:26:06.795 bw ( KiB/s): min= 1248, max= 8192, per=100.00%, avg=5916.44, stdev=2631.21, samples=9 00:26:06.795 iops : min= 312, max= 2048, avg=1479.11, stdev=657.80, samples=9 00:26:06.795 lat (usec) : 250=46.01%, 500=51.09%, 750=0.09%, 1000=0.01% 00:26:06.795 lat (msec) : 4=0.02%, 50=2.78%, >=2000=0.01% 00:26:06.795 cpu : usr=0.20%, sys=0.34%, ctx=12970, majf=0, minf=2 00:26:06.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:06.795 issued rwts: total=6312,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:06.795 00:26:06.795 Run status group 0 (all jobs): 00:26:06.795 READ: bw=421KiB/s (431kB/s), 421KiB/s-421KiB/s (431kB/s-431kB/s), io=24.7MiB (25.9MB), run=60036-60036msec 00:26:06.795 WRITE: bw=443KiB/s (454kB/s), 443KiB/s-443KiB/s (454kB/s-454kB/s), io=26.0MiB (27.3MB), run=60036-60036msec 00:26:06.795 00:26:06.795 Disk stats (read/write): 00:26:06.795 nvme0n1: ios=6407/6656, merge=0/0, ticks=16569/1457, in_queue=18026, util=99.88% 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:06.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:06.795 nvmf hotplug test: fio successful as expected 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:06.795 rmmod nvme_tcp 00:26:06.795 rmmod nvme_fabrics 00:26:06.795 rmmod nvme_keyring 00:26:06.795 19:32:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:06.795 19:32:16 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1706386 ']' 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1706386 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1706386 ']' 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1706386 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1706386 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1706386' 00:26:06.795 killing process with pid 1706386 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1706386 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1706386 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.795 19:32:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.734 19:32:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:07.734 00:26:07.734 real 1m12.009s 00:26:07.734 user 4m22.577s 00:26:07.734 sys 0m5.824s 00:26:07.734 19:32:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:07.734 19:32:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:07.734 ************************************ 00:26:07.734 END TEST nvmf_initiator_timeout 00:26:07.734 ************************************ 00:26:07.734 19:32:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:07.734 19:32:18 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:07.734 19:32:18 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:07.734 19:32:18 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:07.734 19:32:18 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:07.734 19:32:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.012 19:32:23 nvmf_tcp 
-- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:13.012 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:13.012 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.012 19:32:23 
nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:13.012 Found net devices under 0000:86:00.0: cvl_0_0 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:13.012 Found net devices under 0000:86:00.1: cvl_0_1 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:13.012 19:32:23 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:13.012 19:32:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:13.012 19:32:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.012 19:32:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.012 ************************************ 00:26:13.012 START TEST nvmf_perf_adq 00:26:13.012 ************************************ 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:13.012 * Looking for test storage... 
00:26:13.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:13.012 19:32:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:18.331 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:18.331 Found 0000:86:00.1 (0x8086 - 0x159b) 
00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:18.331 Found net devices under 0000:86:00.0: cvl_0_0 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:18.331 Found net devices under 0000:86:00.1: cvl_0_1 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:18.331 19:32:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:18.900 19:32:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:20.803 19:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:26.073 19:32:36 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:26.073 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.073 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:26.074 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:26.074 Found net devices under 0000:86:00.0: cvl_0_0 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:26.074 Found net devices under 0000:86:00.1: cvl_0_1 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.074 19:32:36 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:26.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:26:26.074 00:26:26.074 --- 10.0.0.2 ping statistics --- 00:26:26.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.074 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:26.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:26:26.074 00:26:26.074 --- 10.0.0.1 ping statistics --- 00:26:26.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.074 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1724562 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1724562 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1724562 ']' 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.074 19:32:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.074 [2024-07-15 19:32:36.889765] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:26:26.074 [2024-07-15 19:32:36.889806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.074 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.074 [2024-07-15 19:32:36.919033] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:26.331 [2024-07-15 19:32:36.947081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.331 [2024-07-15 19:32:36.989001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.331 [2024-07-15 19:32:36.989039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.331 [2024-07-15 19:32:36.989046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.331 [2024-07-15 19:32:36.989052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.331 [2024-07-15 19:32:36.989057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.331 [2024-07-15 19:32:36.989103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.331 [2024-07-15 19:32:36.989197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.331 [2024-07-15 19:32:36.989283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.331 [2024-07-15 19:32:36.989285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 
-- # rpc_cmd framework_start_init 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.331 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.589 [2024-07-15 19:32:37.207020] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.589 Malloc1 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.589 [2024-07-15 19:32:37.258605] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1724738 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:26.589 19:32:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:26.589 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.480 19:32:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:28.480 19:32:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.480 19:32:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:28.480 
19:32:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.480 19:32:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:28.480 "tick_rate": 2300000000, 00:26:28.480 "poll_groups": [ 00:26:28.480 { 00:26:28.480 "name": "nvmf_tgt_poll_group_000", 00:26:28.480 "admin_qpairs": 1, 00:26:28.480 "io_qpairs": 1, 00:26:28.480 "current_admin_qpairs": 1, 00:26:28.480 "current_io_qpairs": 1, 00:26:28.480 "pending_bdev_io": 0, 00:26:28.480 "completed_nvme_io": 20855, 00:26:28.480 "transports": [ 00:26:28.480 { 00:26:28.480 "trtype": "TCP" 00:26:28.480 } 00:26:28.480 ] 00:26:28.480 }, 00:26:28.480 { 00:26:28.480 "name": "nvmf_tgt_poll_group_001", 00:26:28.480 "admin_qpairs": 0, 00:26:28.480 "io_qpairs": 1, 00:26:28.480 "current_admin_qpairs": 0, 00:26:28.480 "current_io_qpairs": 1, 00:26:28.480 "pending_bdev_io": 0, 00:26:28.480 "completed_nvme_io": 21236, 00:26:28.480 "transports": [ 00:26:28.480 { 00:26:28.480 "trtype": "TCP" 00:26:28.480 } 00:26:28.480 ] 00:26:28.480 }, 00:26:28.480 { 00:26:28.480 "name": "nvmf_tgt_poll_group_002", 00:26:28.480 "admin_qpairs": 0, 00:26:28.480 "io_qpairs": 1, 00:26:28.480 "current_admin_qpairs": 0, 00:26:28.480 "current_io_qpairs": 1, 00:26:28.480 "pending_bdev_io": 0, 00:26:28.480 "completed_nvme_io": 21077, 00:26:28.480 "transports": [ 00:26:28.480 { 00:26:28.481 "trtype": "TCP" 00:26:28.481 } 00:26:28.481 ] 00:26:28.481 }, 00:26:28.481 { 00:26:28.481 "name": "nvmf_tgt_poll_group_003", 00:26:28.481 "admin_qpairs": 0, 00:26:28.481 "io_qpairs": 1, 00:26:28.481 "current_admin_qpairs": 0, 00:26:28.481 "current_io_qpairs": 1, 00:26:28.481 "pending_bdev_io": 0, 00:26:28.481 "completed_nvme_io": 20821, 00:26:28.481 "transports": [ 00:26:28.481 { 00:26:28.481 "trtype": "TCP" 00:26:28.481 } 00:26:28.481 ] 00:26:28.481 } 00:26:28.481 ] 00:26:28.481 }' 00:26:28.481 19:32:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:28.481 19:32:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:28.736 19:32:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:28.736 19:32:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:28.736 19:32:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1724738 00:26:36.831 Initializing NVMe Controllers 00:26:36.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:36.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:36.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:36.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:36.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:36.831 Initialization complete. Launching workers. 
00:26:36.831 ======================================================== 00:26:36.831 Latency(us) 00:26:36.831 Device Information : IOPS MiB/s Average min max 00:26:36.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10778.80 42.10 5938.33 2141.84 10596.62 00:26:36.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11055.30 43.18 5788.39 2784.80 10135.53 00:26:36.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10933.00 42.71 5854.59 2176.75 10242.12 00:26:36.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10840.70 42.35 5903.29 2689.40 10357.70 00:26:36.831 ======================================================== 00:26:36.831 Total : 43607.79 170.34 5870.61 2141.84 10596.62 00:26:36.831 00:26:36.831 19:32:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:36.831 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.831 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:36.831 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.831 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:36.831 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.831 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.831 rmmod nvme_tcp 00:26:36.831 rmmod nvme_fabrics 00:26:36.831 rmmod nvme_keyring 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1724562 ']' 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1724562 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1724562 ']' 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1724562 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1724562 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1724562' 00:26:36.832 killing process with pid 1724562 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1724562 00:26:36.832 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1724562 00:26:37.090 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:37.090 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:37.090 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:37.090 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:37.090 19:32:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:37.090 19:32:47 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.090 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.090 19:32:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.994 19:32:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.994 19:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:38.994 19:32:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:40.370 19:32:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:42.356 19:32:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:47.622 19:32:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:47.622 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.623 
19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:47.623 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:47.623 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:47.623 Found net devices under 0000:86:00.0: cvl_0_0 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:47.623 Found net devices under 0000:86:00.1: cvl_0_1 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.623 
19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:47.623 19:32:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:47.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:26:47.623 00:26:47.623 --- 10.0.0.2 ping statistics --- 00:26:47.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.623 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:47.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:26:47.623 00:26:47.623 --- 10.0.0.1 ping statistics --- 00:26:47.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.623 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:47.623 net.core.busy_poll = 1 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:47.623 net.core.busy_read = 1 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:47.623 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1728368 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1728368 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1728368 ']' 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:47.624 [2024-07-15 19:32:58.335758] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:26:47.624 [2024-07-15 19:32:58.335800] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.624 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.624 [2024-07-15 19:32:58.364884] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:47.624 [2024-07-15 19:32:58.393214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:47.624 [2024-07-15 19:32:58.434847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.624 [2024-07-15 19:32:58.434889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.624 [2024-07-15 19:32:58.434896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.624 [2024-07-15 19:32:58.434901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.624 [2024-07-15 19:32:58.434906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:47.624 [2024-07-15 19:32:58.434950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.624 [2024-07-15 19:32:58.435051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.624 [2024-07-15 19:32:58.435136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:47.624 [2024-07-15 19:32:58.435137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.624 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.883 [2024-07-15 19:32:58.646004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.883 Malloc1 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.883 19:32:58 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.883 [2024-07-15 19:32:58.689424] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1728548 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:47.883 19:32:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:47.883 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.411 19:33:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:50.411 19:33:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.411 19:33:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.411 19:33:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.411 19:33:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:50.411 "tick_rate": 2300000000, 00:26:50.411 "poll_groups": [ 00:26:50.411 { 00:26:50.411 "name": "nvmf_tgt_poll_group_000", 00:26:50.411 "admin_qpairs": 1, 00:26:50.411 "io_qpairs": 0, 00:26:50.411 "current_admin_qpairs": 1, 00:26:50.411 "current_io_qpairs": 0, 00:26:50.411 "pending_bdev_io": 0, 00:26:50.411 "completed_nvme_io": 0, 00:26:50.411 "transports": [ 00:26:50.411 { 00:26:50.411 "trtype": "TCP" 00:26:50.411 } 00:26:50.411 ] 00:26:50.411 }, 00:26:50.411 { 00:26:50.411 "name": "nvmf_tgt_poll_group_001", 00:26:50.411 "admin_qpairs": 0, 00:26:50.411 "io_qpairs": 4, 00:26:50.411 "current_admin_qpairs": 0, 00:26:50.411 "current_io_qpairs": 4, 00:26:50.411 "pending_bdev_io": 0, 00:26:50.411 "completed_nvme_io": 45122, 00:26:50.411 "transports": [ 00:26:50.412 { 00:26:50.412 "trtype": "TCP" 00:26:50.412 } 00:26:50.412 ] 00:26:50.412 }, 00:26:50.412 { 00:26:50.412 "name": "nvmf_tgt_poll_group_002", 00:26:50.412 "admin_qpairs": 0, 00:26:50.412 "io_qpairs": 0, 00:26:50.412 "current_admin_qpairs": 0, 00:26:50.412 "current_io_qpairs": 0, 00:26:50.412 "pending_bdev_io": 0, 00:26:50.412 "completed_nvme_io": 0, 00:26:50.412 
"transports": [ 00:26:50.412 { 00:26:50.412 "trtype": "TCP" 00:26:50.412 } 00:26:50.412 ] 00:26:50.412 }, 00:26:50.412 { 00:26:50.412 "name": "nvmf_tgt_poll_group_003", 00:26:50.412 "admin_qpairs": 0, 00:26:50.412 "io_qpairs": 0, 00:26:50.412 "current_admin_qpairs": 0, 00:26:50.412 "current_io_qpairs": 0, 00:26:50.412 "pending_bdev_io": 0, 00:26:50.412 "completed_nvme_io": 0, 00:26:50.412 "transports": [ 00:26:50.412 { 00:26:50.412 "trtype": "TCP" 00:26:50.412 } 00:26:50.412 ] 00:26:50.412 } 00:26:50.412 ] 00:26:50.412 }' 00:26:50.412 19:33:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:50.412 19:33:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:50.412 19:33:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:26:50.412 19:33:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:26:50.412 19:33:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1728548 00:26:58.517 Initializing NVMe Controllers 00:26:58.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:58.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:58.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:58.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:58.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:58.517 Initialization complete. Launching workers. 00:26:58.517 ======================================================== 00:26:58.517 Latency(us) 00:26:58.517 Device Information : IOPS MiB/s Average min max 00:26:58.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6034.90 23.57 10607.47 1435.67 57591.00 00:26:58.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6524.30 25.49 9811.62 1309.72 55486.61 00:26:58.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5470.90 21.37 11702.33 1476.59 57113.68 00:26:58.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5361.10 20.94 11941.53 1769.41 56830.31 00:26:58.517 ======================================================== 00:26:58.517 Total : 23391.20 91.37 10947.32 1309.72 57591.00 00:26:58.517 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:58.517 rmmod nvme_tcp 00:26:58.517 rmmod nvme_fabrics 00:26:58.517 rmmod nvme_keyring 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1728368 ']' 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 
1728368 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1728368 ']' 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1728368 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1728368 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1728368' 00:26:58.517 killing process with pid 1728368 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1728368 00:26:58.517 19:33:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1728368 00:26:58.517 19:33:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:58.517 19:33:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:58.517 19:33:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:58.517 19:33:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:58.517 19:33:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:58.517 19:33:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.517 19:33:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.517 19:33:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.825 19:33:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:01.825 19:33:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:01.825 00:27:01.825 real 0m48.802s 00:27:01.825 user 2m43.073s 00:27:01.825 sys 0m9.268s 00:27:01.825 19:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.825 19:33:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.825 ************************************ 00:27:01.825 END TEST nvmf_perf_adq 00:27:01.825 ************************************ 00:27:01.825 19:33:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:01.825 19:33:12 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:01.825 19:33:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:01.825 19:33:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.825 19:33:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.825 ************************************ 00:27:01.825 START TEST nvmf_shutdown 00:27:01.825 ************************************ 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:01.825 * Looking for test storage... 
00:27:01.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:01.825 ************************************ 00:27:01.825 START TEST nvmf_shutdown_tc1 00:27:01.825 ************************************ 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:01.825 19:33:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.825 19:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.082 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:07.083 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:07.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.083 19:33:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:07.083 Found net devices under 0000:86:00.0: cvl_0_0 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:07.083 Found net devices under 0000:86:00.1: cvl_0_1 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:07.083 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.341 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.341 19:33:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:07.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:27:07.341 00:27:07.341 --- 10.0.0.2 ping statistics --- 00:27:07.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.341 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:07.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:27:07.341 00:27:07.341 --- 10.0.0.1 ping statistics --- 00:27:07.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.341 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:07.341 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:07.342 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.342 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1734341 00:27:07.342 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1734341 00:27:07.342 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:07.342 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1734341 ']' 00:27:07.342 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.342 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.342 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.342 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.342 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.342 [2024-07-15 19:33:18.112887] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:27:07.342 [2024-07-15 19:33:18.112929] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.342 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.342 [2024-07-15 19:33:18.146261] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:07.342 [2024-07-15 19:33:18.169574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.600 [2024-07-15 19:33:18.210388] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.600 [2024-07-15 19:33:18.210427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.600 [2024-07-15 19:33:18.210433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.600 [2024-07-15 19:33:18.210439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.600 [2024-07-15 19:33:18.210445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.600 [2024-07-15 19:33:18.210571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.600 [2024-07-15 19:33:18.210654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.600 [2024-07-15 19:33:18.210760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.600 [2024-07-15 19:33:18.210762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.600 [2024-07-15 19:33:18.357342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.600 19:33:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.600 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.600 Malloc1 00:27:07.600 [2024-07-15 19:33:18.452991] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.857 Malloc2 00:27:07.857 Malloc3 00:27:07.857 Malloc4 00:27:07.857 Malloc5 00:27:07.857 Malloc6 00:27:07.857 Malloc7 00:27:08.116 Malloc8 00:27:08.116 Malloc9 00:27:08.116 Malloc10 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:08.116 19:33:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1734596 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1734596 /var/tmp/bdevperf.sock 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1734596 ']' 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:08.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.116 { 00:27:08.116 "params": { 00:27:08.116 "name": "Nvme$subsystem", 00:27:08.116 "trtype": "$TEST_TRANSPORT", 00:27:08.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.116 "adrfam": "ipv4", 00:27:08.116 "trsvcid": "$NVMF_PORT", 00:27:08.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.116 "hdgst": ${hdgst:-false}, 00:27:08.116 "ddgst": ${ddgst:-false} 00:27:08.116 }, 00:27:08.116 "method": "bdev_nvme_attach_controller" 00:27:08.116 } 00:27:08.116 EOF 00:27:08.116 )") 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.116 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.117 { 00:27:08.117 "params": { 00:27:08.117 "name": "Nvme$subsystem", 00:27:08.117 "trtype": "$TEST_TRANSPORT", 00:27:08.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.117 "adrfam": "ipv4", 00:27:08.117 "trsvcid": "$NVMF_PORT", 00:27:08.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.117 "hdgst": ${hdgst:-false}, 00:27:08.117 "ddgst": ${ddgst:-false} 00:27:08.117 }, 00:27:08.117 "method": "bdev_nvme_attach_controller" 00:27:08.117 } 00:27:08.117 EOF 00:27:08.117 )") 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.117 { 00:27:08.117 "params": { 00:27:08.117 "name": "Nvme$subsystem", 00:27:08.117 "trtype": "$TEST_TRANSPORT", 00:27:08.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.117 "adrfam": "ipv4", 00:27:08.117 "trsvcid": "$NVMF_PORT", 00:27:08.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.117 "hdgst": ${hdgst:-false}, 00:27:08.117 "ddgst": ${ddgst:-false} 00:27:08.117 }, 00:27:08.117 "method": "bdev_nvme_attach_controller" 00:27:08.117 } 00:27:08.117 EOF 00:27:08.117 )") 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.117 { 00:27:08.117 "params": { 00:27:08.117 "name": "Nvme$subsystem", 00:27:08.117 "trtype": "$TEST_TRANSPORT", 00:27:08.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.117 "adrfam": "ipv4", 00:27:08.117 "trsvcid": "$NVMF_PORT", 00:27:08.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.117 "hdgst": ${hdgst:-false}, 00:27:08.117 "ddgst": ${ddgst:-false} 00:27:08.117 }, 00:27:08.117 "method": "bdev_nvme_attach_controller" 00:27:08.117 } 00:27:08.117 EOF 00:27:08.117 )") 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.117 { 00:27:08.117 "params": { 00:27:08.117 "name": "Nvme$subsystem", 00:27:08.117 "trtype": "$TEST_TRANSPORT", 00:27:08.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.117 "adrfam": "ipv4", 00:27:08.117 "trsvcid": "$NVMF_PORT", 00:27:08.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.117 "hdgst": ${hdgst:-false}, 00:27:08.117 "ddgst": ${ddgst:-false} 00:27:08.117 }, 00:27:08.117 "method": "bdev_nvme_attach_controller" 00:27:08.117 } 00:27:08.117 EOF 00:27:08.117 )") 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.117 { 00:27:08.117 "params": { 00:27:08.117 "name": "Nvme$subsystem", 00:27:08.117 "trtype": "$TEST_TRANSPORT", 00:27:08.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.117 "adrfam": "ipv4", 00:27:08.117 "trsvcid": "$NVMF_PORT", 00:27:08.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.117 "hdgst": ${hdgst:-false}, 00:27:08.117 "ddgst": ${ddgst:-false} 00:27:08.117 }, 00:27:08.117 "method": "bdev_nvme_attach_controller" 00:27:08.117 } 00:27:08.117 EOF 00:27:08.117 )") 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.117 { 00:27:08.117 "params": { 00:27:08.117 "name": "Nvme$subsystem", 00:27:08.117 "trtype": "$TEST_TRANSPORT", 00:27:08.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.117 "adrfam": "ipv4", 00:27:08.117 "trsvcid": "$NVMF_PORT", 00:27:08.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.117 "hdgst": ${hdgst:-false}, 00:27:08.117 "ddgst": ${ddgst:-false} 00:27:08.117 }, 00:27:08.117 "method": "bdev_nvme_attach_controller" 00:27:08.117 } 00:27:08.117 EOF 00:27:08.117 )") 00:27:08.117 [2024-07-15 19:33:18.922302] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:08.117 [2024-07-15 19:33:18.922348] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.117 { 00:27:08.117 "params": { 00:27:08.117 "name": "Nvme$subsystem", 00:27:08.117 "trtype": "$TEST_TRANSPORT", 00:27:08.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.117 "adrfam": "ipv4", 00:27:08.117 "trsvcid": "$NVMF_PORT", 00:27:08.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.117 "hdgst": ${hdgst:-false}, 00:27:08.117 "ddgst": ${ddgst:-false} 00:27:08.117 }, 00:27:08.117 "method": "bdev_nvme_attach_controller" 00:27:08.117 } 00:27:08.117 EOF 00:27:08.117 )") 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.117 { 00:27:08.117 "params": { 00:27:08.117 "name": "Nvme$subsystem", 00:27:08.117 "trtype": "$TEST_TRANSPORT", 00:27:08.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.117 "adrfam": "ipv4", 00:27:08.117 "trsvcid": "$NVMF_PORT", 00:27:08.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.117 "hdgst": ${hdgst:-false}, 00:27:08.117 "ddgst": ${ddgst:-false} 00:27:08.117 }, 00:27:08.117 "method": "bdev_nvme_attach_controller" 00:27:08.117 } 00:27:08.117 EOF 00:27:08.117 )") 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.117 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.118 { 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme$subsystem", 00:27:08.118 "trtype": "$TEST_TRANSPORT", 00:27:08.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "$NVMF_PORT", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.118 "hdgst": 
${hdgst:-false}, 00:27:08.118 "ddgst": ${ddgst:-false} 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 } 00:27:08.118 EOF 00:27:08.118 )") 00:27:08.118 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.118 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:08.118 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.118 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:08.118 19:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme1", 00:27:08.118 "trtype": "tcp", 00:27:08.118 "traddr": "10.0.0.2", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "4420", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:08.118 "hdgst": false, 00:27:08.118 "ddgst": false 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 },{ 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme2", 00:27:08.118 "trtype": "tcp", 00:27:08.118 "traddr": "10.0.0.2", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "4420", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:08.118 "hdgst": false, 00:27:08.118 "ddgst": false 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 },{ 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme3", 00:27:08.118 "trtype": "tcp", 00:27:08.118 "traddr": "10.0.0.2", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "4420", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:08.118 "hdgst": false, 00:27:08.118 "ddgst": false 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 },{ 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme4", 00:27:08.118 "trtype": "tcp", 00:27:08.118 "traddr": "10.0.0.2", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "4420", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:08.118 "hdgst": false, 00:27:08.118 "ddgst": false 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 },{ 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme5", 00:27:08.118 "trtype": "tcp", 00:27:08.118 "traddr": "10.0.0.2", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "4420", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:08.118 "hdgst": false, 00:27:08.118 "ddgst": false 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 },{ 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme6", 00:27:08.118 "trtype": "tcp", 00:27:08.118 "traddr": "10.0.0.2", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "4420", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:08.118 "hdgst": false, 00:27:08.118 "ddgst": false 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 },{ 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme7", 00:27:08.118 "trtype": "tcp", 00:27:08.118 "traddr": "10.0.0.2", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "4420", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:08.118 "hdgst": false, 00:27:08.118 
"ddgst": false 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 },{ 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme8", 00:27:08.118 "trtype": "tcp", 00:27:08.118 "traddr": "10.0.0.2", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "4420", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:08.118 "hdgst": false, 00:27:08.118 "ddgst": false 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 },{ 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme9", 00:27:08.118 "trtype": "tcp", 00:27:08.118 "traddr": "10.0.0.2", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "4420", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:08.118 "hdgst": false, 00:27:08.118 "ddgst": false 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 },{ 00:27:08.118 "params": { 00:27:08.118 "name": "Nvme10", 00:27:08.118 "trtype": "tcp", 00:27:08.118 "traddr": "10.0.0.2", 00:27:08.118 "adrfam": "ipv4", 00:27:08.118 "trsvcid": "4420", 00:27:08.118 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:08.118 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:08.118 "hdgst": false, 00:27:08.118 "ddgst": false 00:27:08.118 }, 00:27:08.118 "method": "bdev_nvme_attach_controller" 00:27:08.118 }' 00:27:08.118 [2024-07-15 19:33:18.949886] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:08.398 [2024-07-15 19:33:18.979080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.398 [2024-07-15 19:33:19.018974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.336 19:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:10.336 19:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:10.336 19:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:10.336 19:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.336 19:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 19:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.336 19:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1734596 00:27:10.336 19:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:10.336 19:33:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:11.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1734596 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:11.269 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1734341 00:27:11.269 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:11.269 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:11.269 
19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:11.269 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:11.269 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.269 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.269 { 00:27:11.269 "params": { 00:27:11.269 "name": "Nvme$subsystem", 00:27:11.269 "trtype": "$TEST_TRANSPORT", 00:27:11.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.269 "adrfam": "ipv4", 00:27:11.269 "trsvcid": "$NVMF_PORT", 00:27:11.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.269 "hdgst": ${hdgst:-false}, 00:27:11.269 "ddgst": ${ddgst:-false} 00:27:11.269 }, 00:27:11.269 "method": "bdev_nvme_attach_controller" 00:27:11.269 } 00:27:11.269 EOF 00:27:11.269 )") 00:27:11.269 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:11.269 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.269 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.269 { 00:27:11.269 "params": { 00:27:11.269 "name": "Nvme$subsystem", 00:27:11.269 "trtype": "$TEST_TRANSPORT", 00:27:11.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.269 "adrfam": "ipv4", 00:27:11.269 "trsvcid": "$NVMF_PORT", 00:27:11.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.270 "hdgst": ${hdgst:-false}, 00:27:11.270 "ddgst": ${ddgst:-false} 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 } 00:27:11.270 EOF 00:27:11.270 )") 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.270 { 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme$subsystem", 00:27:11.270 "trtype": "$TEST_TRANSPORT", 00:27:11.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "$NVMF_PORT", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.270 "hdgst": ${hdgst:-false}, 00:27:11.270 "ddgst": ${ddgst:-false} 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 } 00:27:11.270 EOF 00:27:11.270 )") 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.270 { 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme$subsystem", 00:27:11.270 "trtype": "$TEST_TRANSPORT", 00:27:11.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "$NVMF_PORT", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.270 "hdgst": ${hdgst:-false}, 00:27:11.270 "ddgst": ${ddgst:-false} 00:27:11.270 }, 00:27:11.270 "method": 
"bdev_nvme_attach_controller" 00:27:11.270 } 00:27:11.270 EOF 00:27:11.270 )") 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.270 { 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme$subsystem", 00:27:11.270 "trtype": "$TEST_TRANSPORT", 00:27:11.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "$NVMF_PORT", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.270 "hdgst": ${hdgst:-false}, 00:27:11.270 "ddgst": ${ddgst:-false} 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 } 00:27:11.270 EOF 00:27:11.270 )") 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.270 { 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme$subsystem", 00:27:11.270 "trtype": "$TEST_TRANSPORT", 00:27:11.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "$NVMF_PORT", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.270 "hdgst": ${hdgst:-false}, 00:27:11.270 "ddgst": ${ddgst:-false} 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 } 00:27:11.270 EOF 00:27:11.270 )") 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.270 { 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme$subsystem", 00:27:11.270 "trtype": "$TEST_TRANSPORT", 00:27:11.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "$NVMF_PORT", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.270 "hdgst": ${hdgst:-false}, 00:27:11.270 "ddgst": ${ddgst:-false} 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 } 00:27:11.270 EOF 00:27:11.270 )") 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:11.270 [2024-07-15 19:33:21.817769] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:27:11.270 [2024-07-15 19:33:21.817818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735096 ] 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.270 { 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme$subsystem", 00:27:11.270 "trtype": "$TEST_TRANSPORT", 00:27:11.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "$NVMF_PORT", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.270 "hdgst": ${hdgst:-false}, 00:27:11.270 "ddgst": ${ddgst:-false} 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 } 00:27:11.270 EOF 00:27:11.270 )") 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.270 { 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme$subsystem", 00:27:11.270 "trtype": "$TEST_TRANSPORT", 00:27:11.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "$NVMF_PORT", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.270 "hdgst": ${hdgst:-false}, 00:27:11.270 "ddgst": ${ddgst:-false} 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 } 00:27:11.270 EOF 00:27:11.270 )") 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.270 { 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme$subsystem", 00:27:11.270 "trtype": "$TEST_TRANSPORT", 00:27:11.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "$NVMF_PORT", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.270 "hdgst": ${hdgst:-false}, 00:27:11.270 "ddgst": ${ddgst:-false} 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 } 00:27:11.270 EOF 00:27:11.270 )") 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:11.270 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:11.270 19:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme1", 00:27:11.270 "trtype": "tcp", 00:27:11.270 "traddr": "10.0.0.2", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "4420", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:11.270 "hdgst": false, 00:27:11.270 "ddgst": false 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 },{ 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme2", 00:27:11.270 "trtype": "tcp", 00:27:11.270 "traddr": "10.0.0.2", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "4420", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:11.270 "hdgst": false, 00:27:11.270 "ddgst": false 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 },{ 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme3", 00:27:11.270 "trtype": "tcp", 00:27:11.270 "traddr": "10.0.0.2", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "4420", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:11.270 "hdgst": false, 00:27:11.270 "ddgst": false 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 },{ 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme4", 00:27:11.270 "trtype": "tcp", 00:27:11.270 "traddr": "10.0.0.2", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "4420", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:11.270 "hdgst": false, 00:27:11.270 "ddgst": false 00:27:11.270 }, 00:27:11.270 "method": "bdev_nvme_attach_controller" 00:27:11.270 },{ 00:27:11.270 "params": { 00:27:11.270 "name": "Nvme5", 00:27:11.270 "trtype": "tcp", 00:27:11.270 "traddr": "10.0.0.2", 00:27:11.270 "adrfam": "ipv4", 00:27:11.270 "trsvcid": "4420", 00:27:11.270 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:11.270 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:11.270 "hdgst": false, 00:27:11.270 "ddgst": false 00:27:11.270 }, 00:27:11.271 "method": "bdev_nvme_attach_controller" 00:27:11.271 },{ 00:27:11.271 "params": { 00:27:11.271 "name": "Nvme6", 00:27:11.271 "trtype": "tcp", 00:27:11.271 "traddr": "10.0.0.2", 00:27:11.271 "adrfam": "ipv4", 00:27:11.271 "trsvcid": "4420", 00:27:11.271 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:11.271 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:11.271 "hdgst": false, 00:27:11.271 "ddgst": false 00:27:11.271 }, 00:27:11.271 "method": "bdev_nvme_attach_controller" 00:27:11.271 },{ 00:27:11.271 "params": { 00:27:11.271 "name": "Nvme7", 00:27:11.271 "trtype": "tcp", 00:27:11.271 "traddr": "10.0.0.2", 00:27:11.271 "adrfam": "ipv4", 00:27:11.271 "trsvcid": "4420", 00:27:11.271 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:11.271 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:11.271 "hdgst": false, 00:27:11.271 "ddgst": false 00:27:11.271 }, 00:27:11.271 "method": "bdev_nvme_attach_controller" 00:27:11.271 },{ 00:27:11.271 "params": { 00:27:11.271 "name": "Nvme8", 00:27:11.271 "trtype": "tcp", 00:27:11.271 "traddr": "10.0.0.2", 00:27:11.271 "adrfam": "ipv4", 00:27:11.271 "trsvcid": "4420", 00:27:11.271 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:11.271 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:27:11.271 "hdgst": false, 00:27:11.271 "ddgst": false 00:27:11.271 }, 00:27:11.271 "method": "bdev_nvme_attach_controller" 00:27:11.271 },{ 00:27:11.271 "params": { 00:27:11.271 "name": "Nvme9", 00:27:11.271 "trtype": "tcp", 00:27:11.271 "traddr": "10.0.0.2", 00:27:11.271 "adrfam": "ipv4", 00:27:11.271 "trsvcid": "4420", 00:27:11.271 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:11.271 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:11.271 "hdgst": false, 00:27:11.271 "ddgst": false 00:27:11.271 }, 00:27:11.271 "method": "bdev_nvme_attach_controller" 00:27:11.271 },{ 00:27:11.271 "params": { 00:27:11.271 "name": "Nvme10", 00:27:11.271 "trtype": "tcp", 00:27:11.271 "traddr": "10.0.0.2", 00:27:11.271 "adrfam": "ipv4", 00:27:11.271 "trsvcid": "4420", 00:27:11.271 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:11.271 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:11.271 "hdgst": false, 00:27:11.271 "ddgst": false 00:27:11.271 }, 00:27:11.271 "method": "bdev_nvme_attach_controller" 00:27:11.271 }' 00:27:11.271 [2024-07-15 19:33:21.845908] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:11.271 [2024-07-15 19:33:21.873781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.271 [2024-07-15 19:33:21.914097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.643 Running I/O for 1 seconds... 00:27:14.010 00:27:14.010 Latency(us) 00:27:14.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.010 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.010 Verification LBA range: start 0x0 length 0x400 00:27:14.010 Nvme1n1 : 1.14 279.69 17.48 0.00 0.00 226361.70 17096.35 212450.62 00:27:14.010 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.010 Verification LBA range: start 0x0 length 0x400 00:27:14.010 Nvme2n1 : 1.15 279.08 17.44 0.00 0.00 224048.53 16526.47 194214.51 00:27:14.010 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.010 Verification LBA range: start 0x0 length 0x400 00:27:14.010 Nvme3n1 : 1.16 276.45 17.28 0.00 0.00 223176.97 15842.62 217009.64 00:27:14.010 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.010 Verification LBA range: start 0x0 length 0x400 00:27:14.010 Nvme4n1 : 1.14 280.36 17.52 0.00 0.00 216648.84 15614.66 217921.45 00:27:14.010 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.010 Verification LBA range: start 0x0 length 0x400 00:27:14.010 Nvme5n1 : 1.17 274.03 17.13 0.00 0.00 218936.90 18008.15 216097.84 00:27:14.010 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.010 Verification LBA range: start 0x0 length 0x400 00:27:14.010 Nvme6n1 : 1.17 273.45 17.09 0.00 0.00 216225.17 16412.49 219745.06 00:27:14.010 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.010 Verification LBA range: start 0x0 length 0x400 00:27:14.010 Nvme7n1 : 1.16 274.75 17.17 0.00 0.00 211981.89 17096.35 216097.84 00:27:14.010 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.010 Verification LBA range: start 0x0 length 0x400 00:27:14.010 Nvme8n1 : 1.15 279.75 17.48 0.00 0.00 204618.42 1759.50 208803.39 00:27:14.010 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.010 Verification LBA range: start 0x0 
length 0x400 00:27:14.010 Nvme9n1 : 1.18 271.96 17.00 0.00 0.00 208113.31 17210.32 238892.97 00:27:14.010 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.010 Verification LBA range: start 0x0 length 0x400 00:27:14.010 Nvme10n1 : 1.17 272.68 17.04 0.00 0.00 204377.04 19033.93 224304.08 00:27:14.010 =================================================================================================================== 00:27:14.010 Total : 2762.21 172.64 0.00 0.00 215438.73 1759.50 238892.97 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:14.010 rmmod nvme_tcp 00:27:14.010 rmmod nvme_fabrics 00:27:14.010 rmmod nvme_keyring 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1734341 ']' 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1734341 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1734341 ']' 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1734341 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1734341 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1734341' 00:27:14.010 killing process with pid 1734341 00:27:14.010 19:33:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1734341 00:27:14.010 19:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1734341 00:27:14.636 19:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:14.636 19:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:14.636 19:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:14.636 19:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:14.636 19:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:14.636 19:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.636 19:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:14.636 19:33:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:16.538 00:27:16.538 real 0m14.832s 00:27:16.538 user 0m33.651s 00:27:16.538 sys 0m5.464s 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:16.538 ************************************ 00:27:16.538 END TEST nvmf_shutdown_tc1 00:27:16.538 ************************************ 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:16.538 ************************************ 00:27:16.538 START TEST nvmf_shutdown_tc2 00:27:16.538 ************************************ 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.538 19:33:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:16.538 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:16.538 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.538 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:16.539 Found net devices under 0000:86:00.0: cvl_0_0 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:16.539 Found net devices under 0000:86:00.1: cvl_0_1 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.539 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.798 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.798 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.798 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:16.798 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.798 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.798 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.798 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:16.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:27:16.799 00:27:16.799 --- 10.0.0.2 ping statistics --- 00:27:16.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.799 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:27:16.799 00:27:16.799 --- 10.0.0.1 ping statistics --- 00:27:16.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.799 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1736120 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1736120 00:27:16.799 19:33:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1736120 ']' 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:16.799 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.058 [2024-07-15 19:33:27.694022] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:17.058 [2024-07-15 19:33:27.694061] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.058 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.058 [2024-07-15 19:33:27.723068] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:17.058 [2024-07-15 19:33:27.749393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.058 [2024-07-15 19:33:27.788999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.058 [2024-07-15 19:33:27.789037] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.058 [2024-07-15 19:33:27.789044] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.058 [2024-07-15 19:33:27.789049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.058 [2024-07-15 19:33:27.789054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:17.058 [2024-07-15 19:33:27.789100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.058 [2024-07-15 19:33:27.789187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.058 [2024-07-15 19:33:27.789717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.058 [2024-07-15 19:33:27.789718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:17.058 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:17.058 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:17.058 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.058 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.058 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.316 [2024-07-15 19:33:27.940237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.316 19:33:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.316 19:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.316 Malloc1 00:27:17.316 [2024-07-15 19:33:28.036053] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.316 Malloc2 00:27:17.317 Malloc3 00:27:17.317 Malloc4 00:27:17.574 Malloc5 00:27:17.574 Malloc6 00:27:17.574 Malloc7 00:27:17.574 Malloc8 00:27:17.574 Malloc9 00:27:17.574 Malloc10 00:27:17.574 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.574 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:17.574 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.574 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1736181 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1736181 /var/tmp/bdevperf.sock 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1736181 ']' 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:17.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
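The create_subsystems phase traced above (target/shutdown.sh@26-@36) only echoes the per-subsystem cat calls that append to rpcs.txt before the whole batch is replayed with rpc_cmd; the RPC lines themselves never appear in the log. As a hedged sketch, not copied from shutdown.sh, one subsystem backed by a malloc bdev would typically be described with standard scripts/rpc.py methods along these lines (bdev size, block size and serial number are purely illustrative; the NQNs, address and port match the ones visible elsewhere in this log):

bdev_malloc_create -b Malloc1 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The transport itself is created once up front (rpc_cmd nvmf_create_transport -t tcp -o -u 8192 at target/shutdown.sh@20 above), so only the per-subsystem objects repeat for cnode1 through cnode10.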
00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.833 { 00:27:17.833 "params": { 00:27:17.833 "name": "Nvme$subsystem", 00:27:17.833 "trtype": "$TEST_TRANSPORT", 00:27:17.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.833 "adrfam": "ipv4", 00:27:17.833 "trsvcid": "$NVMF_PORT", 00:27:17.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.833 "hdgst": ${hdgst:-false}, 00:27:17.833 "ddgst": ${ddgst:-false} 00:27:17.833 }, 00:27:17.833 "method": "bdev_nvme_attach_controller" 00:27:17.833 } 00:27:17.833 EOF 00:27:17.833 )") 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.833 { 00:27:17.833 "params": { 00:27:17.833 "name": "Nvme$subsystem", 00:27:17.833 "trtype": "$TEST_TRANSPORT", 00:27:17.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.833 "adrfam": "ipv4", 00:27:17.833 "trsvcid": "$NVMF_PORT", 00:27:17.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.833 "hdgst": ${hdgst:-false}, 00:27:17.833 "ddgst": ${ddgst:-false} 00:27:17.833 }, 00:27:17.833 "method": "bdev_nvme_attach_controller" 00:27:17.833 } 00:27:17.833 EOF 00:27:17.833 )") 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.833 { 00:27:17.833 "params": { 00:27:17.833 "name": "Nvme$subsystem", 00:27:17.833 "trtype": "$TEST_TRANSPORT", 00:27:17.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.833 "adrfam": "ipv4", 00:27:17.833 "trsvcid": "$NVMF_PORT", 00:27:17.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.833 "hdgst": ${hdgst:-false}, 00:27:17.833 "ddgst": ${ddgst:-false} 00:27:17.833 }, 00:27:17.833 "method": "bdev_nvme_attach_controller" 00:27:17.833 } 00:27:17.833 EOF 00:27:17.833 )") 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.833 { 00:27:17.833 "params": { 00:27:17.833 "name": "Nvme$subsystem", 00:27:17.833 "trtype": "$TEST_TRANSPORT", 00:27:17.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.833 "adrfam": "ipv4", 00:27:17.833 "trsvcid": "$NVMF_PORT", 
00:27:17.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.833 "hdgst": ${hdgst:-false}, 00:27:17.833 "ddgst": ${ddgst:-false} 00:27:17.833 }, 00:27:17.833 "method": "bdev_nvme_attach_controller" 00:27:17.833 } 00:27:17.833 EOF 00:27:17.833 )") 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.833 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.833 { 00:27:17.833 "params": { 00:27:17.833 "name": "Nvme$subsystem", 00:27:17.834 "trtype": "$TEST_TRANSPORT", 00:27:17.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "$NVMF_PORT", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.834 "hdgst": ${hdgst:-false}, 00:27:17.834 "ddgst": ${ddgst:-false} 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 } 00:27:17.834 EOF 00:27:17.834 )") 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.834 { 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme$subsystem", 00:27:17.834 "trtype": "$TEST_TRANSPORT", 00:27:17.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "$NVMF_PORT", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.834 "hdgst": ${hdgst:-false}, 00:27:17.834 "ddgst": ${ddgst:-false} 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 } 00:27:17.834 EOF 00:27:17.834 )") 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.834 { 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme$subsystem", 00:27:17.834 "trtype": "$TEST_TRANSPORT", 00:27:17.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "$NVMF_PORT", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.834 "hdgst": ${hdgst:-false}, 00:27:17.834 "ddgst": ${ddgst:-false} 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 } 00:27:17.834 EOF 00:27:17.834 )") 00:27:17.834 [2024-07-15 19:33:28.506492] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:27:17.834 [2024-07-15 19:33:28.506540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736181 ] 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.834 { 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme$subsystem", 00:27:17.834 "trtype": "$TEST_TRANSPORT", 00:27:17.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "$NVMF_PORT", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.834 "hdgst": ${hdgst:-false}, 00:27:17.834 "ddgst": ${ddgst:-false} 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 } 00:27:17.834 EOF 00:27:17.834 )") 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.834 { 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme$subsystem", 00:27:17.834 "trtype": "$TEST_TRANSPORT", 00:27:17.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "$NVMF_PORT", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.834 "hdgst": ${hdgst:-false}, 00:27:17.834 "ddgst": ${ddgst:-false} 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 } 00:27:17.834 EOF 00:27:17.834 )") 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.834 { 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme$subsystem", 00:27:17.834 "trtype": "$TEST_TRANSPORT", 00:27:17.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "$NVMF_PORT", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.834 "hdgst": ${hdgst:-false}, 00:27:17.834 "ddgst": ${ddgst:-false} 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 } 00:27:17.834 EOF 00:27:17.834 )") 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
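The gen_nvmf_target_json trace above (nvmf/common.sh@532-@556) shows the pattern: one bdev_nvme_attach_controller entry is built per subsystem from a heredoc, the entries are joined with IFS=, and the result is pretty-printed through jq before bdevperf reads it via --json /dev/fd/63 (target/shutdown.sh@102 above). A condensed, self-contained bash sketch of that pattern follows; the function name is hypothetical and the outer "subsystems"/"bdev" wrapper is assumed only so the sketch emits valid standalone JSON, since the real common.sh splices the joined entries into a larger config that is not fully visible here:

#!/usr/bin/env bash
# Sketch only: emulate the per-subsystem config generation traced above.
gen_attach_config_sketch() {
  local n entries=()
  for n in $(seq 1 "${1:-10}"); do
    entries+=("$(cat <<EOF
{ "params": { "name": "Nvme$n", "trtype": "tcp", "traddr": "10.0.0.2",
  "adrfam": "ipv4", "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode$n",
  "hostnqn": "nqn.2016-06.io.spdk:host$n",
  "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
  done
  local IFS=,   # join the fragments with commas, as at nvmf/common.sh@557
  printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${entries[*]}" | jq .
}
gen_attach_config_sketch 10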
00:27:17.834 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:17.834 19:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme1", 00:27:17.834 "trtype": "tcp", 00:27:17.834 "traddr": "10.0.0.2", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "4420", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:17.834 "hdgst": false, 00:27:17.834 "ddgst": false 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 },{ 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme2", 00:27:17.834 "trtype": "tcp", 00:27:17.834 "traddr": "10.0.0.2", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "4420", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:17.834 "hdgst": false, 00:27:17.834 "ddgst": false 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 },{ 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme3", 00:27:17.834 "trtype": "tcp", 00:27:17.834 "traddr": "10.0.0.2", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "4420", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:17.834 "hdgst": false, 00:27:17.834 "ddgst": false 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 },{ 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme4", 00:27:17.834 "trtype": "tcp", 00:27:17.834 "traddr": "10.0.0.2", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "4420", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:17.834 "hdgst": false, 00:27:17.834 "ddgst": false 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 },{ 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme5", 00:27:17.834 "trtype": "tcp", 00:27:17.834 "traddr": "10.0.0.2", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "4420", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:17.834 "hdgst": false, 00:27:17.834 "ddgst": false 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 },{ 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme6", 00:27:17.834 "trtype": "tcp", 00:27:17.834 "traddr": "10.0.0.2", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "4420", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:17.834 "hdgst": false, 00:27:17.834 "ddgst": false 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 },{ 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme7", 00:27:17.834 "trtype": "tcp", 00:27:17.834 "traddr": "10.0.0.2", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "4420", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:17.834 "hdgst": false, 00:27:17.834 "ddgst": false 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 },{ 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme8", 00:27:17.834 "trtype": "tcp", 00:27:17.834 "traddr": "10.0.0.2", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "4420", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:17.834 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:27:17.834 "hdgst": false, 00:27:17.834 "ddgst": false 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 },{ 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme9", 00:27:17.834 "trtype": "tcp", 00:27:17.834 "traddr": "10.0.0.2", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "4420", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:17.834 "hdgst": false, 00:27:17.834 "ddgst": false 00:27:17.834 }, 00:27:17.834 "method": "bdev_nvme_attach_controller" 00:27:17.834 },{ 00:27:17.834 "params": { 00:27:17.834 "name": "Nvme10", 00:27:17.834 "trtype": "tcp", 00:27:17.834 "traddr": "10.0.0.2", 00:27:17.834 "adrfam": "ipv4", 00:27:17.834 "trsvcid": "4420", 00:27:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:17.834 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:17.834 "hdgst": false, 00:27:17.834 "ddgst": false 00:27:17.834 }, 00:27:17.835 "method": "bdev_nvme_attach_controller" 00:27:17.835 }' 00:27:17.835 [2024-07-15 19:33:28.534773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:17.835 [2024-07-15 19:33:28.564596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.835 [2024-07-15 19:33:28.605682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.732 Running I/O for 10 seconds... 00:27:19.732 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:19.732 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:19.732 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:19.732 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.732 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:19.991 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:20.249 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:20.249 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:20.249 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:20.249 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:20.249 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.249 19:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.249 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.249 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:20.249 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:20.249 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1736181 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1736181 ']' 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1736181 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:20.507 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1736181 00:27:20.764 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:20.764 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:20.764 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1736181' 00:27:20.764 killing process with pid 1736181 00:27:20.764 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1736181 00:27:20.764 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1736181 00:27:20.764 Received shutdown signal, test time was about 0.902096 seconds 00:27:20.764 00:27:20.764 Latency(us) 00:27:20.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.764 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.764 Verification LBA range: start 0x0 length 0x400 00:27:20.764 Nvme1n1 : 0.90 285.75 17.86 0.00 0.00 221633.89 19945.74 201508.95 00:27:20.764 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.764 Verification LBA range: start 0x0 length 0x400 00:27:20.764 Nvme2n1 : 0.88 290.69 18.17 0.00 0.00 213297.86 16070.57 215186.03 00:27:20.764 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.764 Verification LBA range: start 0x0 length 0x400 00:27:20.764 Nvme3n1 : 0.89 286.73 17.92 0.00 0.00 212902.29 18008.15 217921.45 00:27:20.764 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.764 Verification LBA range: start 0x0 length 0x400 00:27:20.764 Nvme4n1 : 0.87 297.52 18.59 0.00 0.00 200761.65 3105.84 215186.03 00:27:20.764 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.764 Verification LBA range: start 0x0 length 0x400 00:27:20.764 Nvme5n1 : 0.90 284.78 17.80 0.00 0.00 206512.75 19261.89 217921.45 00:27:20.764 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.765 Verification LBA range: start 0x0 length 0x400 00:27:20.765 Nvme6n1 : 0.87 221.12 13.82 0.00 0.00 260013.49 24048.86 228863.11 00:27:20.765 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.765 Verification LBA range: start 0x0 length 0x400 00:27:20.765 Nvme7n1 : 0.88 299.82 18.74 0.00 0.00 187304.31 4673.00 199685.34 00:27:20.765 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.765 Verification LBA range: start 0x0 length 0x400 00:27:20.765 Nvme8n1 : 0.89 288.86 18.05 0.00 0.00 191564.24 14531.90 219745.06 00:27:20.765 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.765 Verification LBA range: start 0x0 length 0x400 00:27:20.765 Nvme9n1 : 0.86 222.26 13.89 0.00 0.00 242743.21 18236.10 242540.19 00:27:20.765 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:20.765 Verification LBA range: start 0x0 length 0x400 00:27:20.765 Nvme10n1 : 0.90 283.99 17.75 0.00 0.00 187398.46 17438.27 217921.45 00:27:20.765 =================================================================================================================== 00:27:20.765 Total : 2761.53 172.60 0.00 0.00 210261.83 3105.84 242540.19 00:27:21.022 19:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:21.955 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@114 -- # kill -0 1736120 00:27:21.955 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:21.956 rmmod nvme_tcp 00:27:21.956 rmmod nvme_fabrics 00:27:21.956 rmmod nvme_keyring 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1736120 ']' 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1736120 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1736120 ']' 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1736120 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1736120 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1736120' 00:27:21.956 killing process with pid 1736120 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1736120 00:27:21.956 19:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1736120 00:27:22.523 19:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.523 19:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:22.523 19:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:22.523 19:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.523 19:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:22.523 19:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.523 19:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.523 19:33:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:24.421 00:27:24.421 real 0m7.853s 00:27:24.421 user 0m24.189s 00:27:24.421 sys 0m1.314s 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.421 ************************************ 00:27:24.421 END TEST nvmf_shutdown_tc2 00:27:24.421 ************************************ 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:24.421 ************************************ 00:27:24.421 START TEST nvmf_shutdown_tc3 00:27:24.421 ************************************ 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.421 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:24.680 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:24.680 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:24.680 Found net devices under 0000:86:00.0: cvl_0_0 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:24.680 Found net devices under 0000:86:00.1: cvl_0_1 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:24.680 
19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.680 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:24.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:27:24.680 00:27:24.680 --- 10.0.0.2 ping statistics --- 00:27:24.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.681 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:27:24.681 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:24.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:27:24.939 00:27:24.939 --- 10.0.0.1 ping statistics --- 00:27:24.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.939 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1737446 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1737446 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1737446 ']' 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:24.939 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 [2024-07-15 19:33:35.625835] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:24.939 [2024-07-15 19:33:35.625880] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.939 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.939 [2024-07-15 19:33:35.656088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:24.939 [2024-07-15 19:33:35.684394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.939 [2024-07-15 19:33:35.725629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.939 [2024-07-15 19:33:35.725669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.939 [2024-07-15 19:33:35.725676] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.939 [2024-07-15 19:33:35.725682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.939 [2024-07-15 19:33:35.725691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
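The namespace setup traced above boils down to the following condensed sketch; it only restates the commands already echoed by nvmf_tcp_init, assuming the two ice/e810 ports found under 0000:86:00.* (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 test subnet used in this run:

    # target interface moves into its own namespace; initiator stays in the default one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator

With both pings answering, nvmf_tgt is then started inside cvl_0_0_ns_spdk so that all NVMe/TCP traffic crosses the two physical ports.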
00:27:24.939 [2024-07-15 19:33:35.725792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.939 [2024-07-15 19:33:35.725882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.939 [2024-07-15 19:33:35.725988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.939 [2024-07-15 19:33:35.725989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.198 [2024-07-15 19:33:35.867304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:25.198 19:33:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.198 19:33:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.198 Malloc1 00:27:25.198 [2024-07-15 19:33:35.962862] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.198 Malloc2 00:27:25.198 Malloc3 00:27:25.455 Malloc4 00:27:25.455 Malloc5 00:27:25.455 Malloc6 00:27:25.455 Malloc7 00:27:25.455 Malloc8 00:27:25.455 Malloc9 00:27:25.713 Malloc10 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1737714 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1737714 /var/tmp/bdevperf.sock 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1737714 ']' 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:25.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.713 { 00:27:25.713 "params": { 00:27:25.713 "name": "Nvme$subsystem", 00:27:25.713 "trtype": "$TEST_TRANSPORT", 00:27:25.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.713 "adrfam": "ipv4", 00:27:25.713 "trsvcid": "$NVMF_PORT", 00:27:25.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.713 "hdgst": ${hdgst:-false}, 00:27:25.713 "ddgst": ${ddgst:-false} 00:27:25.713 }, 00:27:25.713 "method": "bdev_nvme_attach_controller" 00:27:25.713 } 00:27:25.713 EOF 00:27:25.713 )") 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.713 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.713 { 00:27:25.713 "params": { 00:27:25.713 "name": "Nvme$subsystem", 00:27:25.714 "trtype": "$TEST_TRANSPORT", 00:27:25.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "$NVMF_PORT", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.714 "hdgst": ${hdgst:-false}, 00:27:25.714 "ddgst": ${ddgst:-false} 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 } 00:27:25.714 EOF 00:27:25.714 )") 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.714 { 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme$subsystem", 00:27:25.714 "trtype": "$TEST_TRANSPORT", 00:27:25.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "$NVMF_PORT", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.714 "hdgst": ${hdgst:-false}, 00:27:25.714 "ddgst": ${ddgst:-false} 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 } 00:27:25.714 EOF 00:27:25.714 )") 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.714 { 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme$subsystem", 00:27:25.714 "trtype": "$TEST_TRANSPORT", 00:27:25.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "$NVMF_PORT", 
00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.714 "hdgst": ${hdgst:-false}, 00:27:25.714 "ddgst": ${ddgst:-false} 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 } 00:27:25.714 EOF 00:27:25.714 )") 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.714 { 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme$subsystem", 00:27:25.714 "trtype": "$TEST_TRANSPORT", 00:27:25.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "$NVMF_PORT", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.714 "hdgst": ${hdgst:-false}, 00:27:25.714 "ddgst": ${ddgst:-false} 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 } 00:27:25.714 EOF 00:27:25.714 )") 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.714 { 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme$subsystem", 00:27:25.714 "trtype": "$TEST_TRANSPORT", 00:27:25.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "$NVMF_PORT", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.714 "hdgst": ${hdgst:-false}, 00:27:25.714 "ddgst": ${ddgst:-false} 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 } 00:27:25.714 EOF 00:27:25.714 )") 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.714 { 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme$subsystem", 00:27:25.714 "trtype": "$TEST_TRANSPORT", 00:27:25.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "$NVMF_PORT", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.714 "hdgst": ${hdgst:-false}, 00:27:25.714 "ddgst": ${ddgst:-false} 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 } 00:27:25.714 EOF 00:27:25.714 )") 00:27:25.714 [2024-07-15 19:33:36.429716] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:27:25.714 [2024-07-15 19:33:36.429765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737714 ] 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.714 { 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme$subsystem", 00:27:25.714 "trtype": "$TEST_TRANSPORT", 00:27:25.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "$NVMF_PORT", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.714 "hdgst": ${hdgst:-false}, 00:27:25.714 "ddgst": ${ddgst:-false} 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 } 00:27:25.714 EOF 00:27:25.714 )") 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.714 { 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme$subsystem", 00:27:25.714 "trtype": "$TEST_TRANSPORT", 00:27:25.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "$NVMF_PORT", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.714 "hdgst": ${hdgst:-false}, 00:27:25.714 "ddgst": ${ddgst:-false} 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 } 00:27:25.714 EOF 00:27:25.714 )") 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:25.714 { 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme$subsystem", 00:27:25.714 "trtype": "$TEST_TRANSPORT", 00:27:25.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "$NVMF_PORT", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.714 "hdgst": ${hdgst:-false}, 00:27:25.714 "ddgst": ${ddgst:-false} 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 } 00:27:25.714 EOF 00:27:25.714 )") 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:25.714 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:25.714 19:33:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme1", 00:27:25.714 "trtype": "tcp", 00:27:25.714 "traddr": "10.0.0.2", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "4420", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:25.714 "hdgst": false, 00:27:25.714 "ddgst": false 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 },{ 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme2", 00:27:25.714 "trtype": "tcp", 00:27:25.714 "traddr": "10.0.0.2", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "4420", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:25.714 "hdgst": false, 00:27:25.714 "ddgst": false 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 },{ 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme3", 00:27:25.714 "trtype": "tcp", 00:27:25.714 "traddr": "10.0.0.2", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "4420", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:25.714 "hdgst": false, 00:27:25.714 "ddgst": false 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 },{ 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme4", 00:27:25.714 "trtype": "tcp", 00:27:25.714 "traddr": "10.0.0.2", 00:27:25.714 "adrfam": "ipv4", 00:27:25.714 "trsvcid": "4420", 00:27:25.714 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:25.714 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:25.714 "hdgst": false, 00:27:25.714 "ddgst": false 00:27:25.714 }, 00:27:25.714 "method": "bdev_nvme_attach_controller" 00:27:25.714 },{ 00:27:25.714 "params": { 00:27:25.714 "name": "Nvme5", 00:27:25.714 "trtype": "tcp", 00:27:25.715 "traddr": "10.0.0.2", 00:27:25.715 "adrfam": "ipv4", 00:27:25.715 "trsvcid": "4420", 00:27:25.715 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:25.715 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:25.715 "hdgst": false, 00:27:25.715 "ddgst": false 00:27:25.715 }, 00:27:25.715 "method": "bdev_nvme_attach_controller" 00:27:25.715 },{ 00:27:25.715 "params": { 00:27:25.715 "name": "Nvme6", 00:27:25.715 "trtype": "tcp", 00:27:25.715 "traddr": "10.0.0.2", 00:27:25.715 "adrfam": "ipv4", 00:27:25.715 "trsvcid": "4420", 00:27:25.715 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:25.715 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:25.715 "hdgst": false, 00:27:25.715 "ddgst": false 00:27:25.715 }, 00:27:25.715 "method": "bdev_nvme_attach_controller" 00:27:25.715 },{ 00:27:25.715 "params": { 00:27:25.715 "name": "Nvme7", 00:27:25.715 "trtype": "tcp", 00:27:25.715 "traddr": "10.0.0.2", 00:27:25.715 "adrfam": "ipv4", 00:27:25.715 "trsvcid": "4420", 00:27:25.715 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:25.715 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:25.715 "hdgst": false, 00:27:25.715 "ddgst": false 00:27:25.715 }, 00:27:25.715 "method": "bdev_nvme_attach_controller" 00:27:25.715 },{ 00:27:25.715 "params": { 00:27:25.715 "name": "Nvme8", 00:27:25.715 "trtype": "tcp", 00:27:25.715 "traddr": "10.0.0.2", 00:27:25.715 "adrfam": "ipv4", 00:27:25.715 "trsvcid": "4420", 00:27:25.715 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:25.715 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:27:25.715 "hdgst": false, 00:27:25.715 "ddgst": false 00:27:25.715 }, 00:27:25.715 "method": "bdev_nvme_attach_controller" 00:27:25.715 },{ 00:27:25.715 "params": { 00:27:25.715 "name": "Nvme9", 00:27:25.715 "trtype": "tcp", 00:27:25.715 "traddr": "10.0.0.2", 00:27:25.715 "adrfam": "ipv4", 00:27:25.715 "trsvcid": "4420", 00:27:25.715 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:25.715 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:25.715 "hdgst": false, 00:27:25.715 "ddgst": false 00:27:25.715 }, 00:27:25.715 "method": "bdev_nvme_attach_controller" 00:27:25.715 },{ 00:27:25.715 "params": { 00:27:25.715 "name": "Nvme10", 00:27:25.715 "trtype": "tcp", 00:27:25.715 "traddr": "10.0.0.2", 00:27:25.715 "adrfam": "ipv4", 00:27:25.715 "trsvcid": "4420", 00:27:25.715 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:25.715 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:25.715 "hdgst": false, 00:27:25.715 "ddgst": false 00:27:25.715 }, 00:27:25.715 "method": "bdev_nvme_attach_controller" 00:27:25.715 }' 00:27:25.715 [2024-07-15 19:33:36.456721] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:25.715 [2024-07-15 19:33:36.485855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.715 [2024-07-15 19:33:36.525577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.649 Running I/O for 10 seconds... 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:27.649 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:27.907 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:27.907 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:27.907 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:27.907 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:27.907 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.907 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:27.907 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.907 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:27.907 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:27.907 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:28.164 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:28.164 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:28.164 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:28.164 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:28.164 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.164 19:33:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:28.164 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1737446 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1737446 ']' 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1737446 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 
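The read_io_count samples above (3, then 67, then 195 against the -ge 100 threshold) come from shutdown.sh's waitforio helper; a minimal sketch of that polling loop, assuming the bdevperf RPC socket at /var/tmp/bdevperf.sock and the rpc_cmd/jq helpers seen in the trace (not the verbatim shutdown.sh source), looks roughly like:

    # Poll bdevperf until Nvme1n1 has completed at least 100 reads,
    # sampling every 0.25s and giving up after 10 attempts.
    ret=1
    for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                    | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                    ret=0
                    break
            fi
            sleep 0.25
    done

Once the threshold is crossed (ret=0), the test proceeds to kill the nvmf target (pid 1737446) while bdevperf I/O is still in flight, which is what produces the reconnect/error notices that follow.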
00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737446 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737446' 00:27:28.438 killing process with pid 1737446 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1737446 00:27:28.438 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1737446 00:27:28.438 [2024-07-15 19:33:39.093771] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093829] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093844] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093857] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093881] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093888] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093894] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093900] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093907] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 00:27:28.438 [2024-07-15 19:33:39.093920] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set 
00:27:28.438 [2024-07-15 19:33:39.093926] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147bfd0 is same with the state(5) to be set
[the *ERROR* line above from tcp.c:1621:nvmf_tcp_qpair_set_recv_state repeats continuously between 19:33:39.093926 and 19:33:39.106549 (wall clock 00:27:28.438 through 00:27:28.444) for tqpair addresses 0x147bfd0, 0x148f6c0, 0x12ab700, 0x12abba0, 0x12ac060, 0x12ac500, 0x12ac870, 0x12acd10, 0x12ad1d0 and 0x12ad670; duplicate entries omitted]
00:27:28.444 [2024-07-15 19:33:39.106549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same
with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106556] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106569] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106575] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106587] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106593] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106599] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106605] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106617] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106624] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106641] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.106648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad670 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.116269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 
19:33:39.116336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ae50 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.116391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9b950 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.116479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab4750 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.116560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea470 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.116650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4a70 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.116734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 
19:33:39.116743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13df610 is same with the state(5) to be set 00:27:28.444 [2024-07-15 19:33:39.116816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.444 [2024-07-15 19:33:39.116837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.444 [2024-07-15 19:33:39.116845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.116852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.116861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.116868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.116875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab30e0 is same with the state(5) to be set 00:27:28.445 [2024-07-15 19:33:39.116899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.116908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.116915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.116922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.116929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.116935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.116942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.116949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.116956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab5fd0 is same with the state(5) to be set 00:27:28.445 [2024-07-15 19:33:39.116978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.116987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.116994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.117000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.117014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.117027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7c820 is same with the state(5) to be set 00:27:28.445 [2024-07-15 19:33:39.117058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.117067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.117080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.117095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 
19:33:39.117104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.445 [2024-07-15 19:33:39.117111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abdae0 is same with the state(5) to be set 00:27:28.445 [2024-07-15 19:33:39.117330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.445 [2024-07-15 19:33:39.117724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.445 [2024-07-15 19:33:39.117731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.117986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.117994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.446 [2024-07-15 19:33:39.118347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.446 [2024-07-15 19:33:39.118413] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19964b0 was disconnected and freed. reset controller. 
00:27:28.446 [2024-07-15 19:33:39.118811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.118833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.118847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.118854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.118863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.118870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.118881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.118888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.118897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.118904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.118913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.118920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.118929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.118935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.118943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.118950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.118962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.118969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.118978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.118985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 
[2024-07-15 19:33:39.118993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 
19:33:39.119147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 
19:33:39.119308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.447 [2024-07-15 19:33:39.119325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.447 [2024-07-15 19:33:39.119332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.119829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.119853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:28.448 [2024-07-15 19:33:39.119904] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18e3a40 was disconnected and freed. reset controller. 00:27:28.448 [2024-07-15 19:33:39.120115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.120130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.120142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.120150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.120162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.120170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.120179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.120186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.120195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.448 [2024-07-15 19:33:39.120201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.448 [2024-07-15 19:33:39.120209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.120608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.120615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.449 [2024-07-15 19:33:39.125966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.449 [2024-07-15 19:33:39.125973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.125981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.125990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.125998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.126300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.126306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.129424] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a6c0e0 was disconnected and freed. reset controller. 
00:27:28.450 [2024-07-15 19:33:39.129497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7ae50 (9): Bad file descriptor 00:27:28.450 [2024-07-15 19:33:39.129518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9b950 (9): Bad file descriptor 00:27:28.450 [2024-07-15 19:33:39.129538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab4750 (9): Bad file descriptor 00:27:28.450 [2024-07-15 19:33:39.129551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ea470 (9): Bad file descriptor 00:27:28.450 [2024-07-15 19:33:39.129565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d4a70 (9): Bad file descriptor 00:27:28.450 [2024-07-15 19:33:39.129576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13df610 (9): Bad file descriptor 00:27:28.450 [2024-07-15 19:33:39.129587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab30e0 (9): Bad file descriptor 00:27:28.450 [2024-07-15 19:33:39.129599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab5fd0 (9): Bad file descriptor 00:27:28.450 [2024-07-15 19:33:39.129616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7c820 (9): Bad file descriptor 00:27:28.450 [2024-07-15 19:33:39.129628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abdae0 (9): Bad file descriptor 00:27:28.450 [2024-07-15 19:33:39.133807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:28.450 [2024-07-15 19:33:39.133859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:28.450 [2024-07-15 19:33:39.133873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:28.450 [2024-07-15 19:33:39.135290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.450 [2024-07-15 19:33:39.135322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d4a70 with addr=10.0.0.2, port=4420 00:27:28.450 [2024-07-15 19:33:39.135334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4a70 is same with the state(5) to be set 00:27:28.450 [2024-07-15 19:33:39.135478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.450 [2024-07-15 19:33:39.135493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13df610 with addr=10.0.0.2, port=4420 00:27:28.450 [2024-07-15 19:33:39.135503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13df610 is same with the state(5) to be set 00:27:28.450 [2024-07-15 19:33:39.135748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.450 [2024-07-15 19:33:39.135762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9b950 with addr=10.0.0.2, port=4420 00:27:28.450 [2024-07-15 19:33:39.135773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9b950 is same with the state(5) to be set 00:27:28.450 [2024-07-15 19:33:39.135827] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.450 [2024-07-15 19:33:39.135880] 
nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.450 [2024-07-15 19:33:39.135931] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.450 [2024-07-15 19:33:39.135998] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.450 [2024-07-15 19:33:39.136077] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:28.450 [2024-07-15 19:33:39.136123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.136137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.136157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.136169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.136182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.136192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.136206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.136216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.136245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.450 [2024-07-15 19:33:39.136256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.450 [2024-07-15 19:33:39.136267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:28.451 [2024-07-15 19:33:39.136799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.451 [2024-07-15 19:33:39.136929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.451 [2024-07-15 19:33:39.136939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.136951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.136960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.136972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.136982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.136994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 
19:33:39.137015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137241] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6ac70 is same with the state(5) to be set 00:27:28.452 [2024-07-15 19:33:39.137625] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a6ac70 was disconnected and freed. reset controller. 
00:27:28.452 [2024-07-15 19:33:39.137665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d4a70 (9): Bad file descriptor 00:27:28.452 [2024-07-15 19:33:39.137680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13df610 (9): Bad file descriptor 00:27:28.452 [2024-07-15 19:33:39.137693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9b950 (9): Bad file descriptor 00:27:28.452 [2024-07-15 19:33:39.137803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.452 [2024-07-15 19:33:39.137929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.452 [2024-07-15 19:33:39.137941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.137951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.137961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.137971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.137983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.137992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.453 [2024-07-15 19:33:39.138345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.453 [2024-07-15 19:33:39.138358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.454 [2024-07-15 19:33:39.138869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.454 [2024-07-15 19:33:39.138879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.138890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.138899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.138910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.138923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.138935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.138944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.138956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.138966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.138977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.138986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.138998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.139007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.139019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.139028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.139040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.139049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.139060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.139070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.139082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.139091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.139103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.139112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.139124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.139133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.139144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.139153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.139165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.139175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.139185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e4eb0 is same with the state(5) to be set 00:27:28.455 [2024-07-15 19:33:39.139250] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18e4eb0 was disconnected and freed. reset controller. 00:27:28.455 [2024-07-15 19:33:39.140535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:28.455 [2024-07-15 19:33:39.140566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:28.455 [2024-07-15 19:33:39.140577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:28.455 [2024-07-15 19:33:39.140587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:28.455 [2024-07-15 19:33:39.140603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:28.455 [2024-07-15 19:33:39.140612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:28.455 [2024-07-15 19:33:39.140620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
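(Editor's note, a minimal hedged sketch, not part of the captured test output: the "resetting controller" / "controller reinitialization failed ... in failed state" messages for cnode4, cnode6 and cnode9 come from the async reconnect path. The function names below (spdk_nvme_ctrlr_disconnect, spdk_nvme_ctrlr_reconnect_async, spdk_nvme_ctrlr_reconnect_poll_async) are the public SPDK nvme.h API that matches the symbols printed in this log; the polling loop and error handling are illustrative assumptions.

#include <errno.h>
#include "spdk/nvme.h"

static int
try_reconnect(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc = spdk_nvme_ctrlr_disconnect(ctrlr);
	if (rc != 0) {
		return rc;
	}

	spdk_nvme_ctrlr_reconnect_async(ctrlr);

	/* -EAGAIN means the reconnect is still in progress; any other
	 * non-zero value is the "controller reinitialization failed" case
	 * seen above, after which the controller is marked failed. */
	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);

	return rc;
}
)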
00:27:28.455 [2024-07-15 19:33:39.140635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:28.455 [2024-07-15 19:33:39.140643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:28.455 [2024-07-15 19:33:39.140653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:28.455 [2024-07-15 19:33:39.140707] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.455 [2024-07-15 19:33:39.141863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.455 [2024-07-15 19:33:39.141876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.455 [2024-07-15 19:33:39.141883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.455 [2024-07-15 19:33:39.142174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.455 [2024-07-15 19:33:39.142189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab4750 with addr=10.0.0.2, port=4420 00:27:28.455 [2024-07-15 19:33:39.142197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab4750 is same with the state(5) to be set 00:27:28.455 [2024-07-15 19:33:39.142269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.142280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.142293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.142301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.142310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.142317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.142327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.455 [2024-07-15 19:33:39.142335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.455 [2024-07-15 19:33:39.142343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:28.456 [2024-07-15 19:33:39.142379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 
[2024-07-15 19:33:39.142533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 
19:33:39.142704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.456 [2024-07-15 19:33:39.142800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.456 [2024-07-15 19:33:39.142806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142860] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.142985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.142993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.457 [2024-07-15 19:33:39.143213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.457 [2024-07-15 19:33:39.143222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.143234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.143242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.143250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.143258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.143265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.143274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.143280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.143291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.143299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.143306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995050 is same with the state(5) to be set 00:27:28.458 [2024-07-15 19:33:39.144309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.458 [2024-07-15 19:33:39.144497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.458 [2024-07-15 19:33:39.144504] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated log output condensed: the remaining READ commands of this burst (sqid:1, cid 12-63, nsid:1, lba 26112-32640, len:128) are each printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) by spdk_nvme_print_completion; the burst ends with nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a5470 is same with the state(5) to be set. Two further bursts repeat the identical READ / ABORTED - SQ DELETION pattern for cid 0-63, the first ending with the same recv-state error for tqpair=0x19a6900 ...]
00:27:28.465 [2024-07-15 19:33:39.149434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.465 [2024-07-15 19:33:39.149442]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.465 [2024-07-15 19:33:39.149450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1997920 is same with the state(5) to be set 00:27:28.465 [2024-07-15 19:33:39.150477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.465 [2024-07-15 19:33:39.150494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.465 [2024-07-15 19:33:39.150505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.465 [2024-07-15 19:33:39.150513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.465 [2024-07-15 19:33:39.150523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.465 [2024-07-15 19:33:39.150530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.465 [2024-07-15 19:33:39.150539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.465 [2024-07-15 19:33:39.150546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.465 [2024-07-15 19:33:39.150555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.465 [2024-07-15 19:33:39.150562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.465 [2024-07-15 19:33:39.150573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.465 [2024-07-15 19:33:39.150581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.465 [2024-07-15 19:33:39.150590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.465 [2024-07-15 19:33:39.150597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.465 [2024-07-15 19:33:39.150606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.465 [2024-07-15 19:33:39.150613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.465 [2024-07-15 19:33:39.150621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150637] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.150990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.150997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.151007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.151016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.151025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.151032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.151042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.466 [2024-07-15 19:33:39.151049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.466 [2024-07-15 19:33:39.151057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.467 [2024-07-15 19:33:39.151297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 
19:33:39.151457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.467 [2024-07-15 19:33:39.151506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.467 [2024-07-15 19:33:39.151514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.468 [2024-07-15 19:33:39.151522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.468 [2024-07-15 19:33:39.151529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6360 is same with the state(5) to be set 00:27:28.468 [2024-07-15 19:33:39.152993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:28.468 [2024-07-15 19:33:39.153018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:28.468 [2024-07-15 19:33:39.153028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:28.468 [2024-07-15 19:33:39.153037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:28.468 [2024-07-15 19:33:39.153075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab4750 (9): Bad file descriptor 00:27:28.468 [2024-07-15 19:33:39.153114] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.468 [2024-07-15 19:33:39.153132] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.468 [2024-07-15 19:33:39.153144] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
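The dumps above list the in-flight READs that completed with ABORTED - SQ DELETION when their submission queues were deleted; the second dump (tqpair 0x18e6360) runs from cid 0 through cid 63, with the LBA stepping by 128 blocks per command (24576 up to 32640). A minimal sketch for tallying these aborts from a saved copy of this log (the file name build.log is an assumption):

    # total aborted completions recorded in a saved copy of this log (file name is an assumption)
    grep -c 'ABORTED - SQ DELETION' build.log
    # LBAs that were still in flight when the queues were deleted, with repeat counts
    grep -o 'lba:[0-9]* len:128' build.log | sort | uniq -c | sort -rn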
00:27:28.468 [2024-07-15 19:33:39.153419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:28.468 task offset: 32768 on job bdev=Nvme4n1 fails
00:27:28.468
00:27:28.468 Latency(us)
00:27:28.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:28.468 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.468 Job: Nvme1n1 ended in about 0.95 seconds with error
00:27:28.468 Verification LBA range: start 0x0 length 0x400
00:27:28.468 Nvme1n1 : 0.95 206.44 12.90 67.41 0.00 231499.23 16754.42 217009.64
00:27:28.468 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.468 Job: Nvme2n1 ended in about 0.95 seconds with error
00:27:28.468 Verification LBA range: start 0x0 length 0x400
00:27:28.468 Nvme2n1 : 0.95 201.80 12.61 67.27 0.00 231733.87 17096.35 215186.03
00:27:28.468 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.468 Job: Nvme3n1 ended in about 0.95 seconds with error
00:27:28.468 Verification LBA range: start 0x0 length 0x400
00:27:28.468 Nvme3n1 : 0.95 201.37 12.59 67.12 0.00 228287.89 17210.32 217921.45
00:27:28.468 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.468 Job: Nvme4n1 ended in about 0.94 seconds with error
00:27:28.468 Verification LBA range: start 0x0 length 0x400
00:27:28.468 Nvme4n1 : 0.94 273.53 17.10 68.38 0.00 175891.32 15500.69 214274.23
00:27:28.468 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.468 Job: Nvme5n1 ended in about 0.96 seconds with error
00:27:28.468 Verification LBA range: start 0x0 length 0x400
00:27:28.468 Nvme5n1 : 0.96 200.93 12.56 66.98 0.00 220898.39 15614.66 218833.25
00:27:28.468 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.468 Job: Nvme6n1 ended in about 0.94 seconds with error
00:27:28.468 Verification LBA range: start 0x0 length 0x400
00:27:28.468 Nvme6n1 : 0.94 204.90 12.81 68.30 0.00 212351.89 12309.37 225215.89
00:27:28.468 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.468 Job: Nvme7n1 ended in about 0.95 seconds with error
00:27:28.468 Verification LBA range: start 0x0 length 0x400
00:27:28.468 Nvme7n1 : 0.95 210.15 13.13 67.59 0.00 205286.62 15500.69 198773.54
00:27:28.468 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.468 Job: Nvme8n1 ended in about 0.96 seconds with error
00:27:28.468 Verification LBA range: start 0x0 length 0x400
00:27:28.468 Nvme8n1 : 0.96 200.50 12.53 66.83 0.00 209614.14 14816.83 213362.42
00:27:28.468 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.468 Job: Nvme9n1 ended in about 0.95 seconds with error
00:27:28.468 Verification LBA range: start 0x0 length 0x400
00:27:28.468 Nvme9n1 : 0.95 203.07 12.69 67.69 0.00 202599.74 18122.13 217009.64
00:27:28.468 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:28.468 Job: Nvme10n1 ended in about 0.94 seconds with error
00:27:28.468 Verification LBA range: start 0x0 length 0x400
00:27:28.468 Nvme10n1 : 0.94 204.61 12.79 68.20 0.00 196873.35 16412.49 238892.97
00:27:28.468 ===================================================================================================================
00:27:28.468 Total : 2107.31 131.71 675.78 0.00 210652.51 12309.37 238892.97
00:27:28.468 [2024-07-15 19:33:39.174622] app.c:1053:spdk_app_stop: *WARNING*: 
spdk_app_stop'd on non-zero 00:27:28.468 [2024-07-15 19:33:39.174660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:28.468 [2024-07-15 19:33:39.174990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.468 [2024-07-15 19:33:39.175009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab30e0 with addr=10.0.0.2, port=4420 00:27:28.468 [2024-07-15 19:33:39.175018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab30e0 is same with the state(5) to be set 00:27:28.468 [2024-07-15 19:33:39.175203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.468 [2024-07-15 19:33:39.175215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ea470 with addr=10.0.0.2, port=4420 00:27:28.468 [2024-07-15 19:33:39.175223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ea470 is same with the state(5) to be set 00:27:28.468 [2024-07-15 19:33:39.175434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.468 [2024-07-15 19:33:39.175446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab5fd0 with addr=10.0.0.2, port=4420 00:27:28.468 [2024-07-15 19:33:39.175454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab5fd0 is same with the state(5) to be set 00:27:28.468 [2024-07-15 19:33:39.175722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.468 [2024-07-15 19:33:39.175734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abdae0 with addr=10.0.0.2, port=4420 00:27:28.468 [2024-07-15 19:33:39.175741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abdae0 is same with the state(5) to be set 00:27:28.468 [2024-07-15 19:33:39.175749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:28.468 [2024-07-15 19:33:39.175756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:28.468 [2024-07-15 19:33:39.175769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:28.468 [2024-07-15 19:33:39.176898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:28.468 [2024-07-15 19:33:39.176915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:28.468 [2024-07-15 19:33:39.176924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:28.468 [2024-07-15 19:33:39.176933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
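A quick consistency check on the Latency(us) table above: each job uses 65536-byte (64 KiB) IOs, so MiB/s should equal IOPS divided by 16. A minimal check of the Total row, using only figures printed in the table:

    # 65536 B = 1/16 MiB, so MiB/s = IOPS / 16; the Total row lists 2107.31 IOPS and 131.71 MiB/s
    awk 'BEGIN { printf "%.2f\n", 2107.31 / 16 }'   # prints 131.71, matching the reported throughput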
00:27:28.468 [2024-07-15 19:33:39.177257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.468 [2024-07-15 19:33:39.177271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7ae50 with addr=10.0.0.2, port=4420 00:27:28.469 [2024-07-15 19:33:39.177278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ae50 is same with the state(5) to be set 00:27:28.469 [2024-07-15 19:33:39.177532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.469 [2024-07-15 19:33:39.177544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7c820 with addr=10.0.0.2, port=4420 00:27:28.469 [2024-07-15 19:33:39.177551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7c820 is same with the state(5) to be set 00:27:28.469 [2024-07-15 19:33:39.177563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab30e0 (9): Bad file descriptor 00:27:28.469 [2024-07-15 19:33:39.177574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ea470 (9): Bad file descriptor 00:27:28.469 [2024-07-15 19:33:39.177583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab5fd0 (9): Bad file descriptor 00:27:28.469 [2024-07-15 19:33:39.177592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abdae0 (9): Bad file descriptor 00:27:28.469 [2024-07-15 19:33:39.177628] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.469 [2024-07-15 19:33:39.177640] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.469 [2024-07-15 19:33:39.177654] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:28.469 [2024-07-15 19:33:39.177664] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:28.469 [2024-07-15 19:33:39.177962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.469 [2024-07-15 19:33:39.177974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9b950 with addr=10.0.0.2, port=4420 00:27:28.469 [2024-07-15 19:33:39.177981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9b950 is same with the state(5) to be set 00:27:28.469 [2024-07-15 19:33:39.178237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.469 [2024-07-15 19:33:39.178248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13df610 with addr=10.0.0.2, port=4420 00:27:28.469 [2024-07-15 19:33:39.178255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13df610 is same with the state(5) to be set 00:27:28.469 [2024-07-15 19:33:39.178502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.469 [2024-07-15 19:33:39.178515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d4a70 with addr=10.0.0.2, port=4420 00:27:28.469 [2024-07-15 19:33:39.178521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4a70 is same with the state(5) to be set 00:27:28.469 [2024-07-15 19:33:39.178533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7ae50 (9): Bad file descriptor 00:27:28.469 [2024-07-15 19:33:39.178543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7c820 (9): Bad file descriptor 00:27:28.469 [2024-07-15 19:33:39.178554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:28.469 [2024-07-15 19:33:39.178560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:28.469 [2024-07-15 19:33:39.178569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:28.469 [2024-07-15 19:33:39.178579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:28.469 [2024-07-15 19:33:39.178586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:28.469 [2024-07-15 19:33:39.178592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:28.469 [2024-07-15 19:33:39.178603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:28.469 [2024-07-15 19:33:39.178609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:28.469 [2024-07-15 19:33:39.178616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:28.469 [2024-07-15 19:33:39.178625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:28.469 [2024-07-15 19:33:39.178631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:28.469 [2024-07-15 19:33:39.178638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:27:28.469 [2024-07-15 19:33:39.178697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:28.469 [2024-07-15 19:33:39.178708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.469 [2024-07-15 19:33:39.178715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.469 [2024-07-15 19:33:39.178720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.469 [2024-07-15 19:33:39.178726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.469 [2024-07-15 19:33:39.178739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9b950 (9): Bad file descriptor 00:27:28.469 [2024-07-15 19:33:39.178749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13df610 (9): Bad file descriptor 00:27:28.469 [2024-07-15 19:33:39.178758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d4a70 (9): Bad file descriptor 00:27:28.469 [2024-07-15 19:33:39.178766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:28.469 [2024-07-15 19:33:39.178772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:28.469 [2024-07-15 19:33:39.178778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:28.469 [2024-07-15 19:33:39.178789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:28.469 [2024-07-15 19:33:39.178796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:28.469 [2024-07-15 19:33:39.178802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:28.469 [2024-07-15 19:33:39.178830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.469 [2024-07-15 19:33:39.178837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.469 [2024-07-15 19:33:39.179092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.469 [2024-07-15 19:33:39.179104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab4750 with addr=10.0.0.2, port=4420 00:27:28.469 [2024-07-15 19:33:39.179111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab4750 is same with the state(5) to be set 00:27:28.469 [2024-07-15 19:33:39.179120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:28.469 [2024-07-15 19:33:39.179127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:28.469 [2024-07-15 19:33:39.179134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:27:28.469 [2024-07-15 19:33:39.179143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:28.469 [2024-07-15 19:33:39.179149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:28.469 [2024-07-15 19:33:39.179155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:28.469 [2024-07-15 19:33:39.179162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:28.469 [2024-07-15 19:33:39.179169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:28.469 [2024-07-15 19:33:39.179175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:28.469 [2024-07-15 19:33:39.179199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.469 [2024-07-15 19:33:39.179206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.469 [2024-07-15 19:33:39.179212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.469 [2024-07-15 19:33:39.179220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab4750 (9): Bad file descriptor 00:27:28.470 [2024-07-15 19:33:39.179245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:28.470 [2024-07-15 19:33:39.179252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:28.470 [2024-07-15 19:33:39.179259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:28.470 [2024-07-15 19:33:39.179281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
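The connect() failures above report errno = 111 (ECONNREFUSED): the shutdown test is in the middle of tearing the target down, so each reconnect attempt to 10.0.0.2:4420 is refused and every controller ends up in the failed state. A minimal sketch, reusing the address and port from the log, of how one could confirm from the shell that nothing is listening there:

    # probe the NVMe/TCP listener the controllers are trying to reach (address/port taken from the log)
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null \
      && echo "port 4420 is accepting connections" \
      || echo "connection refused or timed out (expected while the target is being shut down)"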
00:27:28.728 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:28.728 19:33:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1737714 00:27:29.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1737714) - No such process 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.684 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.684 rmmod nvme_tcp 00:27:29.943 rmmod nvme_fabrics 00:27:29.943 rmmod nvme_keyring 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.943 19:33:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.841 19:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.841 00:27:31.841 real 0m7.364s 00:27:31.841 user 0m17.732s 00:27:31.841 sys 0m1.273s 00:27:31.841 
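The stoptarget/nvmftestfini trace above amounts to the following cleanup sequence; this condensed sketch keeps the same paths, module names, and interface name that appear in the trace (they are specific to this CI host):

    # remove bdevperf state and the configs generated by the shutdown test
    rm -f ./local-job0-0-verify.state
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
    sync
    # unload the kernel NVMe/TCP initiator stack (rmmod's nvme_tcp, nvme_fabrics, nvme_keyring)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # _remove_spdk_ns (a helper from nvmf/common.sh) then deletes the test namespaces,
    # and the test IP is flushed from the second port
    ip -4 addr flush cvl_0_1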
19:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.841 19:33:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:31.841 ************************************ 00:27:31.841 END TEST nvmf_shutdown_tc3 00:27:31.841 ************************************ 00:27:31.841 19:33:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:31.841 19:33:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:31.841 00:27:31.841 real 0m30.389s 00:27:31.841 user 1m15.712s 00:27:31.841 sys 0m8.273s 00:27:31.841 19:33:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.841 19:33:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:31.841 ************************************ 00:27:31.841 END TEST nvmf_shutdown 00:27:31.842 ************************************ 00:27:32.100 19:33:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:32.100 19:33:42 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:32.100 19:33:42 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:32.100 19:33:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.100 19:33:42 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:32.100 19:33:42 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:32.100 19:33:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.100 19:33:42 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:32.100 19:33:42 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:32.100 19:33:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:32.100 19:33:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.100 19:33:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.100 ************************************ 00:27:32.100 START TEST nvmf_multicontroller 00:27:32.100 ************************************ 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:32.100 * Looking for test storage... 
00:27:32.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:32.100 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:32.101 19:33:42 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:32.101 19:33:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.382 19:33:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:37.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.382 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:37.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:37.383 Found net devices under 0000:86:00.0: cvl_0_0 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:37.383 Found net devices under 0000:86:00.1: cvl_0_1 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.383 19:33:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:37.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:27:37.383 00:27:37.383 --- 10.0.0.2 ping statistics --- 00:27:37.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.383 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:37.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:27:37.383 00:27:37.383 --- 10.0.0.1 ping statistics --- 00:27:37.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.383 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:37.383 19:33:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1741753 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1741753 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1741753 ']' 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:37.383 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.383 [2024-07-15 19:33:48.071877] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:37.383 [2024-07-15 19:33:48.071922] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.383 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.383 [2024-07-15 19:33:48.101041] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:37.383 [2024-07-15 19:33:48.129132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:37.383 [2024-07-15 19:33:48.170251] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.383 [2024-07-15 19:33:48.170290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.383 [2024-07-15 19:33:48.170298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.383 [2024-07-15 19:33:48.170304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.383 [2024-07-15 19:33:48.170310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
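At this point the nvmf target has just been launched inside the cvl_0_0_ns_spdk namespace, and the rpc_cmd calls traced below provision the two test subsystems that the multicontroller test will attach to. The following is a condensed sketch of that provisioning sequence under a couple of assumptions: scripts/rpc.py is used directly in place of the harness's rpc_cmd wrapper, paths are shortened to repo-relative form, and the target's default RPC socket is in effect; every flag and value is copied from the rpc_cmd lines visible in this trace.

    # launch the target in the test namespace, then provision subsystem 1 over the default RPC socket (sketch)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # the same bdev/subsystem/namespace/listener RPCs are repeated for Malloc1 and nqn.2016-06.io.spdk:cnode2

The two listeners (4420 and 4421) on the same subsystem are what later allow a second path to be added to controller NVMe0 without tripping the duplicate-controller checks.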
00:27:37.383 [2024-07-15 19:33:48.170411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.383 [2024-07-15 19:33:48.170495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.383 [2024-07-15 19:33:48.170497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.642 [2024-07-15 19:33:48.295359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.642 Malloc0 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.642 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.643 [2024-07-15 19:33:48.359293] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.643 
19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.643 [2024-07-15 19:33:48.367247] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.643 Malloc1 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1741779 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1741779 /var/tmp/bdevperf.sock 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1741779 ']' 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:37.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:37.643 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.901 NVMe0n1 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.901 1 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.901 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:37.901 request: 00:27:37.901 { 00:27:37.901 "name": "NVMe0", 00:27:37.901 "trtype": "tcp", 00:27:37.901 "traddr": "10.0.0.2", 00:27:37.901 "adrfam": "ipv4", 00:27:37.901 "trsvcid": "4420", 00:27:37.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.901 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:37.901 "hostaddr": "10.0.0.2", 00:27:37.901 "hostsvcid": "60000", 00:27:37.901 "prchk_reftag": false, 00:27:37.901 "prchk_guard": false, 00:27:38.159 "hdgst": false, 00:27:38.159 "ddgst": false, 00:27:38.159 "method": "bdev_nvme_attach_controller", 00:27:38.159 "req_id": 1 00:27:38.159 } 00:27:38.159 Got JSON-RPC error response 00:27:38.159 response: 00:27:38.159 { 00:27:38.159 "code": -114, 00:27:38.159 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:38.159 } 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.159 request: 00:27:38.159 { 00:27:38.159 "name": "NVMe0", 00:27:38.159 "trtype": "tcp", 00:27:38.159 "traddr": "10.0.0.2", 00:27:38.159 "adrfam": "ipv4", 00:27:38.159 "trsvcid": "4420", 00:27:38.159 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:38.159 "hostaddr": "10.0.0.2", 00:27:38.159 "hostsvcid": "60000", 00:27:38.159 "prchk_reftag": false, 00:27:38.159 "prchk_guard": false, 00:27:38.159 
"hdgst": false, 00:27:38.159 "ddgst": false, 00:27:38.159 "method": "bdev_nvme_attach_controller", 00:27:38.159 "req_id": 1 00:27:38.159 } 00:27:38.159 Got JSON-RPC error response 00:27:38.159 response: 00:27:38.159 { 00:27:38.159 "code": -114, 00:27:38.159 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:38.159 } 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.159 request: 00:27:38.159 { 00:27:38.159 "name": "NVMe0", 00:27:38.159 "trtype": "tcp", 00:27:38.159 "traddr": "10.0.0.2", 00:27:38.159 "adrfam": "ipv4", 00:27:38.159 "trsvcid": "4420", 00:27:38.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.159 "hostaddr": "10.0.0.2", 00:27:38.159 "hostsvcid": "60000", 00:27:38.159 "prchk_reftag": false, 00:27:38.159 "prchk_guard": false, 00:27:38.159 "hdgst": false, 00:27:38.159 "ddgst": false, 00:27:38.159 "multipath": "disable", 00:27:38.159 "method": "bdev_nvme_attach_controller", 00:27:38.159 "req_id": 1 00:27:38.159 } 00:27:38.159 Got JSON-RPC error response 00:27:38.159 response: 00:27:38.159 { 00:27:38.159 "code": -114, 00:27:38.159 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:38.159 } 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:38.159 19:33:48 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.159 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.159 request: 00:27:38.159 { 00:27:38.159 "name": "NVMe0", 00:27:38.159 "trtype": "tcp", 00:27:38.159 "traddr": "10.0.0.2", 00:27:38.159 "adrfam": "ipv4", 00:27:38.159 "trsvcid": "4420", 00:27:38.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.159 "hostaddr": "10.0.0.2", 00:27:38.159 "hostsvcid": "60000", 00:27:38.159 "prchk_reftag": false, 00:27:38.160 "prchk_guard": false, 00:27:38.160 "hdgst": false, 00:27:38.160 "ddgst": false, 00:27:38.160 "multipath": "failover", 00:27:38.160 "method": "bdev_nvme_attach_controller", 00:27:38.160 "req_id": 1 00:27:38.160 } 00:27:38.160 Got JSON-RPC error response 00:27:38.160 response: 00:27:38.160 { 00:27:38.160 "code": -114, 00:27:38.160 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:38.160 } 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.160 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.160 19:33:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.417 00:27:38.417 19:33:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.417 19:33:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:38.417 19:33:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:38.417 19:33:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.417 19:33:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:38.417 19:33:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.417 19:33:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:38.417 19:33:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:39.349 0 00:27:39.349 19:33:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:39.349 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.349 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.349 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.349 19:33:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1741779 00:27:39.349 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1741779 ']' 00:27:39.349 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1741779 00:27:39.349 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1741779 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1741779' 00:27:39.607 killing process with pid 1741779 00:27:39.607 19:33:50 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1741779 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1741779 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:39.607 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:39.608 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:39.608 [2024-07-15 19:33:48.466629] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:39.608 [2024-07-15 19:33:48.466677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741779 ] 00:27:39.608 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.608 [2024-07-15 19:33:48.494833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:39.608 [2024-07-15 19:33:48.519822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.608 [2024-07-15 19:33:48.561584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.608 [2024-07-15 19:33:49.043705] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name cbf10ef6-1d5f-4b49-a5da-c947c97ddd31 already exists 00:27:39.608 [2024-07-15 19:33:49.043734] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:cbf10ef6-1d5f-4b49-a5da-c947c97ddd31 alias for bdev NVMe1n1 00:27:39.608 [2024-07-15 19:33:49.043741] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:39.608 Running I/O for 1 seconds... 
00:27:39.608 00:27:39.608 Latency(us) 00:27:39.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.608 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:39.608 NVMe0n1 : 1.01 23347.36 91.20 0.00 0.00 5464.82 1567.17 6867.03 00:27:39.608 =================================================================================================================== 00:27:39.608 Total : 23347.36 91.20 0.00 0.00 5464.82 1567.17 6867.03 00:27:39.608 Received shutdown signal, test time was about 1.000000 seconds 00:27:39.608 00:27:39.608 Latency(us) 00:27:39.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.608 =================================================================================================================== 00:27:39.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:39.608 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:39.608 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:39.867 rmmod nvme_tcp 00:27:39.867 rmmod nvme_fabrics 00:27:39.867 rmmod nvme_keyring 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1741753 ']' 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1741753 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1741753 ']' 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1741753 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1741753 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1741753' 00:27:39.867 killing process with pid 1741753 00:27:39.867 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1741753 00:27:39.867 19:33:50 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1741753 00:27:40.126 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:40.126 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:40.126 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:40.126 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:40.126 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:40.126 19:33:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.126 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.126 19:33:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.027 19:33:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:42.027 00:27:42.027 real 0m10.043s 00:27:42.027 user 0m11.133s 00:27:42.027 sys 0m4.594s 00:27:42.027 19:33:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:42.027 19:33:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:42.027 ************************************ 00:27:42.027 END TEST nvmf_multicontroller 00:27:42.027 ************************************ 00:27:42.027 19:33:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:42.027 19:33:52 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:42.027 19:33:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:42.027 19:33:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.027 19:33:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:42.285 ************************************ 00:27:42.285 START TEST nvmf_aer 00:27:42.285 ************************************ 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:42.285 * Looking for test storage... 
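Before the nvmf_aer trace proceeds, the nvmf_multicontroller run above can be summarized: bdevperf is started with -z so it idles until driven over /var/tmp/bdevperf.sock, controller NVMe0 is attached to cnode1 on port 4420, and then each re-attach of the same controller name over the same network path (different hostnqn, different subsystem, -x disable, -x failover) is expected to fail with JSON-RPC error -114, while adding the 4421 path to the same subsystem and attaching a separate NVMe1 controller both succeed before bdevperf.py perform_tests drives the I/O captured in try.txt. A minimal sketch of the happy-path portion, again assuming repo-relative paths and scripts/rpc.py in place of the rpc_cmd -s wrapper used by the harness:

    # bdevperf waits for RPC (-z) on its own socket; flags copied from the trace above (sketch)
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # first path: controller NVMe0 -> cnode1 via the 4420 listener, with an explicit host address/port
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # second path to the same subsystem via the 4421 listener is accepted; re-attaching over 4420 is what
    # the test expects to fail with -114 ("already exists with the specified network path")
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The bdev_name_add/bdev_register errors recorded in try.txt (duplicate uuid for NVMe1n1) are expected side effects of attaching a second controller that exposes the same namespace, and do not fail the test.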
00:27:42.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.285 19:33:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.285 19:33:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.285 19:33:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:42.285 19:33:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:42.285 19:33:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:42.285 19:33:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.619 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:47.620 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:27:47.620 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:47.620 Found net devices under 0000:86:00.0: cvl_0_0 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:47.620 Found net devices under 0000:86:00.1: cvl_0_1 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.620 
19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:47.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:27:47.620 00:27:47.620 --- 10.0.0.2 ping statistics --- 00:27:47.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.620 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:27:47.620 00:27:47.620 --- 10.0.0.1 ping statistics --- 00:27:47.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.620 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1745537 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1745537 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1745537 ']' 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.620 19:33:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.620 [2024-07-15 19:33:57.997471] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:47.620 [2024-07-15 19:33:57.997515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.620 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.620 [2024-07-15 19:33:58.026795] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:47.620 [2024-07-15 19:33:58.054268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.620 [2024-07-15 19:33:58.096653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:47.620 [2024-07-15 19:33:58.096691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.620 [2024-07-15 19:33:58.096698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.620 [2024-07-15 19:33:58.096704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.620 [2024-07-15 19:33:58.096709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.620 [2024-07-15 19:33:58.096753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.620 [2024-07-15 19:33:58.096838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.620 [2024-07-15 19:33:58.096929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.620 [2024-07-15 19:33:58.096930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.620 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.620 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:27:47.620 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.620 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:47.620 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.620 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.620 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.620 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.620 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.620 [2024-07-15 19:33:58.235141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.621 Malloc0 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.621 19:33:58 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.621 [2024-07-15 19:33:58.286855] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.621 [ 00:27:47.621 { 00:27:47.621 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:47.621 "subtype": "Discovery", 00:27:47.621 "listen_addresses": [], 00:27:47.621 "allow_any_host": true, 00:27:47.621 "hosts": [] 00:27:47.621 }, 00:27:47.621 { 00:27:47.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.621 "subtype": "NVMe", 00:27:47.621 "listen_addresses": [ 00:27:47.621 { 00:27:47.621 "trtype": "TCP", 00:27:47.621 "adrfam": "IPv4", 00:27:47.621 "traddr": "10.0.0.2", 00:27:47.621 "trsvcid": "4420" 00:27:47.621 } 00:27:47.621 ], 00:27:47.621 "allow_any_host": true, 00:27:47.621 "hosts": [], 00:27:47.621 "serial_number": "SPDK00000000000001", 00:27:47.621 "model_number": "SPDK bdev Controller", 00:27:47.621 "max_namespaces": 2, 00:27:47.621 "min_cntlid": 1, 00:27:47.621 "max_cntlid": 65519, 00:27:47.621 "namespaces": [ 00:27:47.621 { 00:27:47.621 "nsid": 1, 00:27:47.621 "bdev_name": "Malloc0", 00:27:47.621 "name": "Malloc0", 00:27:47.621 "nguid": "3728B971D4804753A3C9E199214F8851", 00:27:47.621 "uuid": "3728b971-d480-4753-a3c9-e199214f8851" 00:27:47.621 } 00:27:47.621 ] 00:27:47.621 } 00:27:47.621 ] 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1745560 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:47.621 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:47.621 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.879 Malloc1 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.879 Asynchronous Event Request test 00:27:47.879 Attaching to 10.0.0.2 00:27:47.879 Attached to 10.0.0.2 00:27:47.879 Registering asynchronous event callbacks... 00:27:47.879 Starting namespace attribute notice tests for all controllers... 00:27:47.879 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:47.879 aer_cb - Changed Namespace 00:27:47.879 Cleaning up... 00:27:47.879 [ 00:27:47.879 { 00:27:47.879 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:47.879 "subtype": "Discovery", 00:27:47.879 "listen_addresses": [], 00:27:47.879 "allow_any_host": true, 00:27:47.879 "hosts": [] 00:27:47.879 }, 00:27:47.879 { 00:27:47.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.879 "subtype": "NVMe", 00:27:47.879 "listen_addresses": [ 00:27:47.879 { 00:27:47.879 "trtype": "TCP", 00:27:47.879 "adrfam": "IPv4", 00:27:47.879 "traddr": "10.0.0.2", 00:27:47.879 "trsvcid": "4420" 00:27:47.879 } 00:27:47.879 ], 00:27:47.879 "allow_any_host": true, 00:27:47.879 "hosts": [], 00:27:47.879 "serial_number": "SPDK00000000000001", 00:27:47.879 "model_number": "SPDK bdev Controller", 00:27:47.879 "max_namespaces": 2, 00:27:47.879 "min_cntlid": 1, 00:27:47.879 "max_cntlid": 65519, 00:27:47.879 "namespaces": [ 00:27:47.879 { 00:27:47.879 "nsid": 1, 00:27:47.879 "bdev_name": "Malloc0", 00:27:47.879 "name": "Malloc0", 00:27:47.879 "nguid": "3728B971D4804753A3C9E199214F8851", 00:27:47.879 "uuid": "3728b971-d480-4753-a3c9-e199214f8851" 00:27:47.879 }, 00:27:47.879 { 00:27:47.879 "nsid": 2, 00:27:47.879 "bdev_name": "Malloc1", 00:27:47.879 "name": "Malloc1", 00:27:47.879 "nguid": "04C5E19272B24221B1897517284B1914", 00:27:47.879 "uuid": "04c5e192-72b2-4221-b189-7517284b1914" 00:27:47.879 } 00:27:47.879 ] 00:27:47.879 } 00:27:47.879 ] 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1745560 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:47.879 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:47.880 rmmod nvme_tcp 00:27:47.880 rmmod nvme_fabrics 00:27:47.880 rmmod nvme_keyring 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1745537 ']' 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1745537 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1745537 ']' 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1745537 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.880 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745537 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745537' 00:27:48.138 killing process with pid 1745537 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1745537 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1745537 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:27:48.138 19:33:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.672 19:34:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.672 00:27:50.672 real 0m8.103s 00:27:50.672 user 0m4.516s 00:27:50.672 sys 0m4.076s 00:27:50.672 19:34:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:50.672 19:34:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:50.672 ************************************ 00:27:50.672 END TEST nvmf_aer 00:27:50.672 ************************************ 00:27:50.672 19:34:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:50.672 19:34:01 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:50.672 19:34:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:50.672 19:34:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.672 19:34:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.672 ************************************ 00:27:50.672 START TEST nvmf_async_init 00:27:50.672 ************************************ 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:50.672 * Looking for test storage... 00:27:50.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.672 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=de3415db4a814da29880836c402b6290 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.673 19:34:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:55.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:55.943 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:55.943 Found net devices under 0000:86:00.0: cvl_0_0 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:55.943 Found net devices under 0000:86:00.1: cvl_0_1 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:55.943 
19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:55.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:27:55.943 00:27:55.943 --- 10.0.0.2 ping statistics --- 00:27:55.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.943 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:55.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:27:55.943 00:27:55.943 --- 10.0.0.1 ping statistics --- 00:27:55.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.943 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:55.943 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1749069 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1749069 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1749069 ']' 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.944 [2024-07-15 19:34:06.500322] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:27:55.944 [2024-07-15 19:34:06.500365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.944 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.944 [2024-07-15 19:34:06.529269] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:55.944 [2024-07-15 19:34:06.555986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.944 [2024-07-15 19:34:06.596418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.944 [2024-07-15 19:34:06.596455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.944 [2024-07-15 19:34:06.596462] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.944 [2024-07-15 19:34:06.596469] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.944 [2024-07-15 19:34:06.596474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:55.944 [2024-07-15 19:34:06.596491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.944 [2024-07-15 19:34:06.729066] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.944 null0 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g de3415db4a814da29880836c402b6290 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.944 [2024-07-15 19:34:06.773300] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.944 19:34:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.202 nvme0n1 00:27:56.202 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.202 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:56.202 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.202 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.202 [ 00:27:56.202 { 00:27:56.202 "name": "nvme0n1", 00:27:56.202 "aliases": [ 00:27:56.202 "de3415db-4a81-4da2-9880-836c402b6290" 00:27:56.202 ], 00:27:56.202 "product_name": "NVMe disk", 00:27:56.202 "block_size": 512, 00:27:56.202 "num_blocks": 2097152, 00:27:56.202 "uuid": "de3415db-4a81-4da2-9880-836c402b6290", 00:27:56.202 "assigned_rate_limits": { 00:27:56.202 "rw_ios_per_sec": 0, 00:27:56.202 "rw_mbytes_per_sec": 0, 00:27:56.202 "r_mbytes_per_sec": 0, 00:27:56.202 "w_mbytes_per_sec": 0 00:27:56.202 }, 00:27:56.202 "claimed": false, 00:27:56.202 "zoned": false, 00:27:56.202 "supported_io_types": { 00:27:56.202 "read": true, 00:27:56.202 "write": true, 00:27:56.202 "unmap": false, 00:27:56.202 "flush": true, 00:27:56.202 "reset": true, 00:27:56.202 "nvme_admin": true, 00:27:56.202 "nvme_io": true, 00:27:56.202 "nvme_io_md": false, 00:27:56.202 "write_zeroes": true, 00:27:56.202 "zcopy": false, 00:27:56.202 "get_zone_info": false, 00:27:56.202 "zone_management": false, 00:27:56.202 "zone_append": false, 00:27:56.202 "compare": true, 00:27:56.202 "compare_and_write": true, 00:27:56.202 "abort": true, 00:27:56.202 "seek_hole": false, 00:27:56.202 "seek_data": false, 00:27:56.202 "copy": true, 00:27:56.202 "nvme_iov_md": false 00:27:56.202 }, 00:27:56.202 "memory_domains": [ 00:27:56.202 { 00:27:56.202 "dma_device_id": "system", 00:27:56.202 "dma_device_type": 1 00:27:56.202 } 00:27:56.202 ], 00:27:56.202 "driver_specific": { 00:27:56.202 "nvme": [ 00:27:56.202 { 00:27:56.202 "trid": { 00:27:56.202 "trtype": "TCP", 00:27:56.202 "adrfam": "IPv4", 00:27:56.202 "traddr": "10.0.0.2", 00:27:56.202 "trsvcid": "4420", 00:27:56.202 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:56.202 }, 00:27:56.203 "ctrlr_data": { 00:27:56.203 "cntlid": 1, 00:27:56.203 "vendor_id": "0x8086", 00:27:56.203 "model_number": "SPDK bdev Controller", 00:27:56.203 "serial_number": "00000000000000000000", 00:27:56.203 "firmware_revision": "24.09", 00:27:56.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:56.203 "oacs": { 00:27:56.203 "security": 0, 00:27:56.203 "format": 0, 00:27:56.203 "firmware": 0, 00:27:56.203 "ns_manage": 0 00:27:56.203 }, 00:27:56.203 "multi_ctrlr": true, 00:27:56.203 "ana_reporting": false 00:27:56.203 }, 00:27:56.203 "vs": { 00:27:56.203 "nvme_version": "1.3" 00:27:56.203 }, 00:27:56.203 "ns_data": { 00:27:56.203 "id": 1, 00:27:56.203 "can_share": true 00:27:56.203 } 00:27:56.203 } 00:27:56.203 ], 00:27:56.203 "mp_policy": "active_passive" 00:27:56.203 } 00:27:56.203 } 00:27:56.203 ] 00:27:56.203 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.203 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:27:56.203 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.203 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.203 [2024-07-15 19:34:07.034805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.203 [2024-07-15 19:34:07.034860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1246700 (9): Bad file descriptor 00:27:56.461 [2024-07-15 19:34:07.166305] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:56.461 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.461 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:56.461 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.461 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.461 [ 00:27:56.461 { 00:27:56.461 "name": "nvme0n1", 00:27:56.461 "aliases": [ 00:27:56.461 "de3415db-4a81-4da2-9880-836c402b6290" 00:27:56.461 ], 00:27:56.461 "product_name": "NVMe disk", 00:27:56.461 "block_size": 512, 00:27:56.461 "num_blocks": 2097152, 00:27:56.461 "uuid": "de3415db-4a81-4da2-9880-836c402b6290", 00:27:56.461 "assigned_rate_limits": { 00:27:56.461 "rw_ios_per_sec": 0, 00:27:56.461 "rw_mbytes_per_sec": 0, 00:27:56.461 "r_mbytes_per_sec": 0, 00:27:56.461 "w_mbytes_per_sec": 0 00:27:56.461 }, 00:27:56.461 "claimed": false, 00:27:56.461 "zoned": false, 00:27:56.461 "supported_io_types": { 00:27:56.461 "read": true, 00:27:56.461 "write": true, 00:27:56.461 "unmap": false, 00:27:56.461 "flush": true, 00:27:56.461 "reset": true, 00:27:56.461 "nvme_admin": true, 00:27:56.461 "nvme_io": true, 00:27:56.461 "nvme_io_md": false, 00:27:56.461 "write_zeroes": true, 00:27:56.461 "zcopy": false, 00:27:56.461 "get_zone_info": false, 00:27:56.461 "zone_management": false, 00:27:56.461 "zone_append": false, 00:27:56.461 "compare": true, 00:27:56.461 "compare_and_write": true, 00:27:56.461 "abort": true, 00:27:56.461 "seek_hole": false, 00:27:56.461 "seek_data": false, 00:27:56.461 "copy": true, 00:27:56.461 "nvme_iov_md": false 00:27:56.461 }, 00:27:56.461 "memory_domains": [ 00:27:56.461 { 00:27:56.461 "dma_device_id": "system", 00:27:56.461 "dma_device_type": 1 00:27:56.461 } 00:27:56.461 ], 00:27:56.461 "driver_specific": { 00:27:56.461 "nvme": [ 00:27:56.461 { 00:27:56.461 "trid": { 00:27:56.461 "trtype": "TCP", 00:27:56.461 "adrfam": "IPv4", 00:27:56.461 "traddr": "10.0.0.2", 00:27:56.461 "trsvcid": "4420", 00:27:56.461 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:56.461 }, 00:27:56.461 "ctrlr_data": { 00:27:56.461 "cntlid": 2, 00:27:56.461 "vendor_id": "0x8086", 00:27:56.461 "model_number": "SPDK bdev Controller", 00:27:56.461 "serial_number": "00000000000000000000", 00:27:56.461 "firmware_revision": "24.09", 00:27:56.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:56.461 "oacs": { 00:27:56.461 "security": 0, 00:27:56.461 "format": 0, 00:27:56.461 "firmware": 0, 00:27:56.461 "ns_manage": 0 00:27:56.461 }, 00:27:56.461 "multi_ctrlr": true, 00:27:56.461 "ana_reporting": false 00:27:56.461 }, 00:27:56.461 "vs": { 00:27:56.461 "nvme_version": "1.3" 00:27:56.461 }, 00:27:56.461 "ns_data": { 00:27:56.461 "id": 1, 00:27:56.461 "can_share": true 00:27:56.461 } 00:27:56.461 } 00:27:56.461 ], 00:27:56.461 "mp_policy": "active_passive" 00:27:56.461 } 00:27:56.461 } 
00:27:56.461 ] 00:27:56.461 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.461 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.461 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.7KDlHcTfju 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.7KDlHcTfju 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.462 [2024-07-15 19:34:07.227385] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:56.462 [2024-07-15 19:34:07.227500] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7KDlHcTfju 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.462 [2024-07-15 19:34:07.235400] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7KDlHcTfju 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.462 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.462 [2024-07-15 19:34:07.247441] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:56.462 [2024-07-15 19:34:07.247475] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
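The hop to port 4421 above is the TLS leg of the test: a PSK is written to a temp file, the subsystem is switched to an explicit host allow-list, a --secure-channel listener is added, and the host is registered and then attached with that PSK. A condensed sketch of the same RPC sequence (key material, paths and NQNs copied from the log; scripts/rpc.py is the standalone equivalent of the test's rpc_cmd wrapper):

key=$(mktemp)                                   # e.g. /tmp/tmp.7KDlHcTfju in this run
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
chmod 0600 "$key"
# require explicit host registration instead of allow-any-host
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
# TLS listener on a second port (4421); still flagged experimental in this build
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
# register the host NQN with its PSK (the path form of --psk is noted as deprecated in the log)
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
# initiator side: attach over the secure listener using the same PSK
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"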
00:27:56.720 nvme0n1 00:27:56.720 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.720 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:56.720 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.720 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.720 [ 00:27:56.720 { 00:27:56.720 "name": "nvme0n1", 00:27:56.720 "aliases": [ 00:27:56.720 "de3415db-4a81-4da2-9880-836c402b6290" 00:27:56.720 ], 00:27:56.720 "product_name": "NVMe disk", 00:27:56.720 "block_size": 512, 00:27:56.720 "num_blocks": 2097152, 00:27:56.720 "uuid": "de3415db-4a81-4da2-9880-836c402b6290", 00:27:56.720 "assigned_rate_limits": { 00:27:56.720 "rw_ios_per_sec": 0, 00:27:56.720 "rw_mbytes_per_sec": 0, 00:27:56.720 "r_mbytes_per_sec": 0, 00:27:56.720 "w_mbytes_per_sec": 0 00:27:56.720 }, 00:27:56.720 "claimed": false, 00:27:56.720 "zoned": false, 00:27:56.720 "supported_io_types": { 00:27:56.720 "read": true, 00:27:56.720 "write": true, 00:27:56.720 "unmap": false, 00:27:56.720 "flush": true, 00:27:56.720 "reset": true, 00:27:56.720 "nvme_admin": true, 00:27:56.720 "nvme_io": true, 00:27:56.720 "nvme_io_md": false, 00:27:56.720 "write_zeroes": true, 00:27:56.720 "zcopy": false, 00:27:56.720 "get_zone_info": false, 00:27:56.720 "zone_management": false, 00:27:56.720 "zone_append": false, 00:27:56.720 "compare": true, 00:27:56.720 "compare_and_write": true, 00:27:56.720 "abort": true, 00:27:56.720 "seek_hole": false, 00:27:56.720 "seek_data": false, 00:27:56.720 "copy": true, 00:27:56.720 "nvme_iov_md": false 00:27:56.720 }, 00:27:56.720 "memory_domains": [ 00:27:56.720 { 00:27:56.720 "dma_device_id": "system", 00:27:56.720 "dma_device_type": 1 00:27:56.720 } 00:27:56.720 ], 00:27:56.720 "driver_specific": { 00:27:56.720 "nvme": [ 00:27:56.720 { 00:27:56.720 "trid": { 00:27:56.720 "trtype": "TCP", 00:27:56.720 "adrfam": "IPv4", 00:27:56.720 "traddr": "10.0.0.2", 00:27:56.720 "trsvcid": "4421", 00:27:56.720 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:56.720 }, 00:27:56.720 "ctrlr_data": { 00:27:56.720 "cntlid": 3, 00:27:56.720 "vendor_id": "0x8086", 00:27:56.720 "model_number": "SPDK bdev Controller", 00:27:56.720 "serial_number": "00000000000000000000", 00:27:56.720 "firmware_revision": "24.09", 00:27:56.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:56.720 "oacs": { 00:27:56.720 "security": 0, 00:27:56.720 "format": 0, 00:27:56.720 "firmware": 0, 00:27:56.720 "ns_manage": 0 00:27:56.720 }, 00:27:56.720 "multi_ctrlr": true, 00:27:56.720 "ana_reporting": false 00:27:56.720 }, 00:27:56.720 "vs": { 00:27:56.720 "nvme_version": "1.3" 00:27:56.720 }, 00:27:56.720 "ns_data": { 00:27:56.720 "id": 1, 00:27:56.720 "can_share": true 00:27:56.720 } 00:27:56.720 } 00:27:56.720 ], 00:27:56.720 "mp_policy": "active_passive" 00:27:56.720 } 00:27:56.720 } 00:27:56.720 ] 00:27:56.720 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.720 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.7KDlHcTfju 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:56.721 rmmod nvme_tcp 00:27:56.721 rmmod nvme_fabrics 00:27:56.721 rmmod nvme_keyring 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1749069 ']' 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1749069 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1749069 ']' 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1749069 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1749069 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1749069' 00:27:56.721 killing process with pid 1749069 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1749069 00:27:56.721 [2024-07-15 19:34:07.467141] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:56.721 [2024-07-15 19:34:07.467165] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:56.721 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1749069 00:27:56.979 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:56.979 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:56.979 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:56.979 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.979 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:56.979 19:34:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.979 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.979 19:34:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:58.881 19:34:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:58.881 00:27:58.881 real 0m8.621s 00:27:58.881 user 0m2.751s 00:27:58.881 sys 0m4.257s 00:27:58.881 19:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:58.881 19:34:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.881 ************************************ 00:27:58.881 END TEST nvmf_async_init 00:27:58.881 ************************************ 00:27:59.139 19:34:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:59.139 19:34:09 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:59.139 19:34:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:59.139 19:34:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.139 19:34:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.139 ************************************ 00:27:59.139 START TEST dma 00:27:59.139 ************************************ 00:27:59.139 19:34:09 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:59.139 * Looking for test storage... 00:27:59.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:59.139 19:34:09 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.139 19:34:09 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.139 19:34:09 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.139 19:34:09 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.139 19:34:09 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.139 19:34:09 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.139 19:34:09 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.139 19:34:09 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:27:59.139 19:34:09 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.139 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:59.140 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:59.140 19:34:09 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:59.140 19:34:09 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:59.140 19:34:09 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:27:59.140 00:27:59.140 real 0m0.113s 00:27:59.140 user 0m0.055s 00:27:59.140 sys 0m0.066s 00:27:59.140 19:34:09 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:59.140 19:34:09 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:27:59.140 ************************************ 00:27:59.140 END TEST dma 00:27:59.140 ************************************ 00:27:59.140 19:34:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:59.140 19:34:09 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:59.140 19:34:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:59.140 19:34:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.140 19:34:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.140 ************************************ 00:27:59.140 START TEST nvmf_identify 00:27:59.140 ************************************ 00:27:59.140 19:34:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:59.398 * Looking for test storage... 00:27:59.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:59.398 19:34:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:04.668 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:04.668 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:04.668 Found net devices under 0000:86:00.0: cvl_0_0 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:04.668 Found net devices under 0000:86:00.1: cvl_0_1 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:04.668 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:04.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:04.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:28:04.927 00:28:04.927 --- 10.0.0.2 ping statistics --- 00:28:04.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.927 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:04.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:28:04.927 00:28:04.927 --- 10.0.0.1 ping statistics --- 00:28:04.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.927 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1752769 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1752769 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1752769 ']' 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.927 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.927 [2024-07-15 19:34:15.701845] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
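The isolated test network brought up just above (cvl_0_0 moved into a namespace as the target side, cvl_0_1 left in the root namespace as the initiator side) boils down to a handful of ip/iptables commands; a condensed sketch, with interface names and addresses taken from this run:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # namespace -> initiator
# nvmf_tgt itself is then launched inside the namespace, as in the log:
# ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF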
00:28:04.927 [2024-07-15 19:34:15.701886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.927 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.927 [2024-07-15 19:34:15.732778] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:04.927 [2024-07-15 19:34:15.761952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.186 [2024-07-15 19:34:15.804248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.186 [2024-07-15 19:34:15.804289] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.186 [2024-07-15 19:34:15.804297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.186 [2024-07-15 19:34:15.804303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.186 [2024-07-15 19:34:15.804308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:05.186 [2024-07-15 19:34:15.804350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.186 [2024-07-15 19:34:15.804449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.186 [2024-07-15 19:34:15.804535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.186 [2024-07-15 19:34:15.804536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.186 [2024-07-15 19:34:15.916294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.186 Malloc0 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.186 19:34:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.186 [2024-07-15 19:34:16.000332] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.186 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.186 19:34:16 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:05.186 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.186 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.186 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.186 19:34:16 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:05.186 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.186 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.186 [ 00:28:05.186 { 00:28:05.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:05.186 "subtype": "Discovery", 00:28:05.186 "listen_addresses": [ 00:28:05.186 { 00:28:05.186 "trtype": "TCP", 00:28:05.186 "adrfam": "IPv4", 00:28:05.186 "traddr": "10.0.0.2", 00:28:05.186 "trsvcid": "4420" 00:28:05.186 } 00:28:05.186 ], 00:28:05.186 "allow_any_host": true, 00:28:05.186 "hosts": [] 00:28:05.186 }, 00:28:05.186 { 00:28:05.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.186 "subtype": "NVMe", 00:28:05.186 "listen_addresses": [ 00:28:05.186 { 00:28:05.186 "trtype": "TCP", 00:28:05.186 "adrfam": "IPv4", 00:28:05.186 "traddr": "10.0.0.2", 00:28:05.186 "trsvcid": "4420" 00:28:05.186 } 00:28:05.186 ], 00:28:05.186 "allow_any_host": true, 00:28:05.186 "hosts": [], 00:28:05.186 "serial_number": "SPDK00000000000001", 00:28:05.187 "model_number": "SPDK bdev Controller", 00:28:05.187 "max_namespaces": 32, 00:28:05.187 "min_cntlid": 1, 00:28:05.187 "max_cntlid": 65519, 00:28:05.187 "namespaces": [ 00:28:05.187 { 00:28:05.187 "nsid": 1, 00:28:05.187 "bdev_name": "Malloc0", 00:28:05.187 "name": "Malloc0", 00:28:05.187 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:05.187 "eui64": "ABCDEF0123456789", 00:28:05.187 "uuid": "f96d990f-a94d-48d6-bacf-a2776a18da8f" 00:28:05.187 } 00:28:05.187 ] 00:28:05.187 } 00:28:05.187 ] 00:28:05.187 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.187 19:34:16 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:05.447 
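Before spdk_nvme_identify is pointed at the discovery service, the target is populated entirely through RPC: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, one subsystem carrying that bdev as namespace 1, and data plus discovery listeners on 10.0.0.2:4420. The same setup as a standalone sketch (values copied from the log; scripts/rpc.py stands in for the test's rpc_cmd wrapper):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems                           # the JSON dump shown above
# then query the discovery controller, as the test does next:
./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all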
[2024-07-15 19:34:16.051737] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:28:05.447 [2024-07-15 19:34:16.051783] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752898 ] 00:28:05.447 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.447 [2024-07-15 19:34:16.065741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:05.447 [2024-07-15 19:34:16.081776] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:05.447 [2024-07-15 19:34:16.081823] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:05.447 [2024-07-15 19:34:16.081827] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:05.447 [2024-07-15 19:34:16.081837] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:05.447 [2024-07-15 19:34:16.081844] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:05.447 [2024-07-15 19:34:16.082203] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:05.447 [2024-07-15 19:34:16.082235] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1424150 0 00:28:05.447 [2024-07-15 19:34:16.096234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:05.447 [2024-07-15 19:34:16.096244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:05.447 [2024-07-15 19:34:16.096248] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:05.447 [2024-07-15 19:34:16.096251] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:05.447 [2024-07-15 19:34:16.096285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.447 [2024-07-15 19:34:16.096291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.447 [2024-07-15 19:34:16.096294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1424150) 00:28:05.447 [2024-07-15 19:34:16.096306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:05.447 [2024-07-15 19:34:16.096322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490980, cid 0, qid 0 00:28:05.447 [2024-07-15 19:34:16.104235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.447 [2024-07-15 19:34:16.104244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.447 [2024-07-15 19:34:16.104248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.447 [2024-07-15 19:34:16.104251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490980) on tqpair=0x1424150 00:28:05.447 [2024-07-15 19:34:16.104261] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:05.447 [2024-07-15 19:34:16.104267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:05.447 [2024-07-15 19:34:16.104272] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:05.447 [2024-07-15 19:34:16.104285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.447 [2024-07-15 19:34:16.104289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.447 [2024-07-15 19:34:16.104292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1424150) 00:28:05.447 [2024-07-15 19:34:16.104300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.447 [2024-07-15 19:34:16.104312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490980, cid 0, qid 0 00:28:05.447 [2024-07-15 19:34:16.104518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.447 [2024-07-15 19:34:16.104524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.447 [2024-07-15 19:34:16.104528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.447 [2024-07-15 19:34:16.104531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490980) on tqpair=0x1424150 00:28:05.447 [2024-07-15 19:34:16.104536] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:05.447 [2024-07-15 19:34:16.104547] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:05.447 [2024-07-15 19:34:16.104553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.447 [2024-07-15 19:34:16.104557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.447 [2024-07-15 19:34:16.104560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.104567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.448 [2024-07-15 19:34:16.104578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490980, cid 0, qid 0 00:28:05.448 [2024-07-15 19:34:16.104655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.448 [2024-07-15 19:34:16.104662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.448 [2024-07-15 19:34:16.104665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.104669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490980) on tqpair=0x1424150 00:28:05.448 [2024-07-15 19:34:16.104673] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:05.448 [2024-07-15 19:34:16.104680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:05.448 [2024-07-15 19:34:16.104687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.104690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.104694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.104700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.448 [2024-07-15 19:34:16.104709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490980, cid 0, qid 0 00:28:05.448 [2024-07-15 19:34:16.104783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.448 [2024-07-15 19:34:16.104789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.448 [2024-07-15 19:34:16.104792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.104796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490980) on tqpair=0x1424150 00:28:05.448 [2024-07-15 19:34:16.104800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:05.448 [2024-07-15 19:34:16.104808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.104811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.104815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.104821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.448 [2024-07-15 19:34:16.104830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490980, cid 0, qid 0 00:28:05.448 [2024-07-15 19:34:16.104903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.448 [2024-07-15 19:34:16.104909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.448 [2024-07-15 19:34:16.104912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.104915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490980) on tqpair=0x1424150 00:28:05.448 [2024-07-15 19:34:16.104919] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:05.448 [2024-07-15 19:34:16.104924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:05.448 [2024-07-15 19:34:16.104930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:05.448 [2024-07-15 19:34:16.105037] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:05.448 [2024-07-15 19:34:16.105041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:05.448 [2024-07-15 19:34:16.105050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.105053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.105056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.105062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.448 [2024-07-15 19:34:16.105071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490980, cid 0, qid 0 
00:28:05.448 [2024-07-15 19:34:16.105158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.448 [2024-07-15 19:34:16.105164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.448 [2024-07-15 19:34:16.105167] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.105170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490980) on tqpair=0x1424150 00:28:05.448 [2024-07-15 19:34:16.105174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:05.448 [2024-07-15 19:34:16.105182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.105185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.105189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.105194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.448 [2024-07-15 19:34:16.105203] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490980, cid 0, qid 0 00:28:05.448 [2024-07-15 19:34:16.105280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.448 [2024-07-15 19:34:16.105286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.448 [2024-07-15 19:34:16.105290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.105293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490980) on tqpair=0x1424150 00:28:05.448 [2024-07-15 19:34:16.105297] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:05.448 [2024-07-15 19:34:16.105301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:05.448 [2024-07-15 19:34:16.105307] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:05.448 [2024-07-15 19:34:16.105315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:05.448 [2024-07-15 19:34:16.105324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.105327] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.105333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.448 [2024-07-15 19:34:16.105343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490980, cid 0, qid 0 00:28:05.448 [2024-07-15 19:34:16.105448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.448 [2024-07-15 19:34:16.105454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.448 [2024-07-15 19:34:16.105457] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.105463] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1424150): datao=0, datal=4096, cccid=0 00:28:05.448 [2024-07-15 19:34:16.105467] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1490980) on tqpair(0x1424150): expected_datao=0, payload_size=4096 00:28:05.448 [2024-07-15 19:34:16.105471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.105497] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.105501] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.448 [2024-07-15 19:34:16.146390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.448 [2024-07-15 19:34:16.146394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490980) on tqpair=0x1424150 00:28:05.448 [2024-07-15 19:34:16.146405] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:05.448 [2024-07-15 19:34:16.146413] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:05.448 [2024-07-15 19:34:16.146418] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:05.448 [2024-07-15 19:34:16.146422] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:05.448 [2024-07-15 19:34:16.146426] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:05.448 [2024-07-15 19:34:16.146431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:05.448 [2024-07-15 19:34:16.146440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:05.448 [2024-07-15 19:34:16.146446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.146461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:05.448 [2024-07-15 19:34:16.146473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490980, cid 0, qid 0 00:28:05.448 [2024-07-15 19:34:16.146552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.448 [2024-07-15 19:34:16.146558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.448 [2024-07-15 19:34:16.146561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490980) on tqpair=0x1424150 00:28:05.448 [2024-07-15 19:34:16.146571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146578] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.146583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.448 [2024-07-15 19:34:16.146589] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.146601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.448 [2024-07-15 19:34:16.146607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.146621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.448 [2024-07-15 19:34:16.146626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.448 [2024-07-15 19:34:16.146633] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.448 [2024-07-15 19:34:16.146638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.449 [2024-07-15 19:34:16.146643] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:05.449 [2024-07-15 19:34:16.146653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:05.449 [2024-07-15 19:34:16.146659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.146662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1424150) 00:28:05.449 [2024-07-15 19:34:16.146669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.449 [2024-07-15 19:34:16.146680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490980, cid 0, qid 0 00:28:05.449 [2024-07-15 19:34:16.146684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490b00, cid 1, qid 0 00:28:05.449 [2024-07-15 19:34:16.146689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490c80, cid 2, qid 0 00:28:05.449 [2024-07-15 19:34:16.146693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.449 [2024-07-15 19:34:16.146697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490f80, cid 4, qid 0 00:28:05.449 [2024-07-15 19:34:16.146811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.449 [2024-07-15 19:34:16.146817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:28:05.449 [2024-07-15 19:34:16.146820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.146824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490f80) on tqpair=0x1424150 00:28:05.449 [2024-07-15 19:34:16.146828] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:05.449 [2024-07-15 19:34:16.146832] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:05.449 [2024-07-15 19:34:16.146842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.146846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1424150) 00:28:05.449 [2024-07-15 19:34:16.146852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.449 [2024-07-15 19:34:16.146861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490f80, cid 4, qid 0 00:28:05.449 [2024-07-15 19:34:16.146947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.449 [2024-07-15 19:34:16.146952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.449 [2024-07-15 19:34:16.146955] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.146959] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1424150): datao=0, datal=4096, cccid=4 00:28:05.449 [2024-07-15 19:34:16.146963] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1490f80) on tqpair(0x1424150): expected_datao=0, payload_size=4096 00:28:05.449 [2024-07-15 19:34:16.146969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.146997] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.147001] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.147043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.449 [2024-07-15 19:34:16.147049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.449 [2024-07-15 19:34:16.147052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.147055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490f80) on tqpair=0x1424150 00:28:05.449 [2024-07-15 19:34:16.147066] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:05.449 [2024-07-15 19:34:16.147086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.147090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1424150) 00:28:05.449 [2024-07-15 19:34:16.147096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.449 [2024-07-15 19:34:16.147102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.147105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.147108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x1424150) 00:28:05.449 [2024-07-15 19:34:16.147114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.449 [2024-07-15 19:34:16.147130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490f80, cid 4, qid 0 00:28:05.449 [2024-07-15 19:34:16.147135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1491100, cid 5, qid 0 00:28:05.449 [2024-07-15 19:34:16.151232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.449 [2024-07-15 19:34:16.151239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.449 [2024-07-15 19:34:16.151242] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.151245] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1424150): datao=0, datal=1024, cccid=4 00:28:05.449 [2024-07-15 19:34:16.151250] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1490f80) on tqpair(0x1424150): expected_datao=0, payload_size=1024 00:28:05.449 [2024-07-15 19:34:16.151254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.151259] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.151263] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.151268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.449 [2024-07-15 19:34:16.151273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.449 [2024-07-15 19:34:16.151276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.151279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1491100) on tqpair=0x1424150 00:28:05.449 [2024-07-15 19:34:16.190232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.449 [2024-07-15 19:34:16.190242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.449 [2024-07-15 19:34:16.190245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.190249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490f80) on tqpair=0x1424150 00:28:05.449 [2024-07-15 19:34:16.190259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.190262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1424150) 00:28:05.449 [2024-07-15 19:34:16.190269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.449 [2024-07-15 19:34:16.190288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490f80, cid 4, qid 0 00:28:05.449 [2024-07-15 19:34:16.190459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.449 [2024-07-15 19:34:16.190465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.449 [2024-07-15 19:34:16.190468] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.190472] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1424150): datao=0, datal=3072, cccid=4 00:28:05.449 [2024-07-15 19:34:16.190476] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1490f80) on tqpair(0x1424150): 
expected_datao=0, payload_size=3072 00:28:05.449 [2024-07-15 19:34:16.190480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.190507] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.190511] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.231381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.449 [2024-07-15 19:34:16.231393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.449 [2024-07-15 19:34:16.231396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.231400] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490f80) on tqpair=0x1424150 00:28:05.449 [2024-07-15 19:34:16.231409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.231413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1424150) 00:28:05.449 [2024-07-15 19:34:16.231420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.449 [2024-07-15 19:34:16.231435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490f80, cid 4, qid 0 00:28:05.449 [2024-07-15 19:34:16.231523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.449 [2024-07-15 19:34:16.231529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.449 [2024-07-15 19:34:16.231532] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.231535] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1424150): datao=0, datal=8, cccid=4 00:28:05.449 [2024-07-15 19:34:16.231539] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1490f80) on tqpair(0x1424150): expected_datao=0, payload_size=8 00:28:05.449 [2024-07-15 19:34:16.231543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.231549] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.231552] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.272376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.449 [2024-07-15 19:34:16.272388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.449 [2024-07-15 19:34:16.272391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.449 [2024-07-15 19:34:16.272395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490f80) on tqpair=0x1424150 00:28:05.449 ===================================================== 00:28:05.449 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:05.449 ===================================================== 00:28:05.449 Controller Capabilities/Features 00:28:05.449 ================================ 00:28:05.449 Vendor ID: 0000 00:28:05.449 Subsystem Vendor ID: 0000 00:28:05.449 Serial Number: .................... 00:28:05.449 Model Number: ........................................ 
00:28:05.449 Firmware Version: 24.09 00:28:05.449 Recommended Arb Burst: 0 00:28:05.449 IEEE OUI Identifier: 00 00 00 00:28:05.449 Multi-path I/O 00:28:05.449 May have multiple subsystem ports: No 00:28:05.449 May have multiple controllers: No 00:28:05.449 Associated with SR-IOV VF: No 00:28:05.449 Max Data Transfer Size: 131072 00:28:05.449 Max Number of Namespaces: 0 00:28:05.449 Max Number of I/O Queues: 1024 00:28:05.449 NVMe Specification Version (VS): 1.3 00:28:05.449 NVMe Specification Version (Identify): 1.3 00:28:05.449 Maximum Queue Entries: 128 00:28:05.449 Contiguous Queues Required: Yes 00:28:05.449 Arbitration Mechanisms Supported 00:28:05.449 Weighted Round Robin: Not Supported 00:28:05.449 Vendor Specific: Not Supported 00:28:05.449 Reset Timeout: 15000 ms 00:28:05.449 Doorbell Stride: 4 bytes 00:28:05.449 NVM Subsystem Reset: Not Supported 00:28:05.449 Command Sets Supported 00:28:05.449 NVM Command Set: Supported 00:28:05.449 Boot Partition: Not Supported 00:28:05.449 Memory Page Size Minimum: 4096 bytes 00:28:05.449 Memory Page Size Maximum: 4096 bytes 00:28:05.449 Persistent Memory Region: Not Supported 00:28:05.449 Optional Asynchronous Events Supported 00:28:05.449 Namespace Attribute Notices: Not Supported 00:28:05.450 Firmware Activation Notices: Not Supported 00:28:05.450 ANA Change Notices: Not Supported 00:28:05.450 PLE Aggregate Log Change Notices: Not Supported 00:28:05.450 LBA Status Info Alert Notices: Not Supported 00:28:05.450 EGE Aggregate Log Change Notices: Not Supported 00:28:05.450 Normal NVM Subsystem Shutdown event: Not Supported 00:28:05.450 Zone Descriptor Change Notices: Not Supported 00:28:05.450 Discovery Log Change Notices: Supported 00:28:05.450 Controller Attributes 00:28:05.450 128-bit Host Identifier: Not Supported 00:28:05.450 Non-Operational Permissive Mode: Not Supported 00:28:05.450 NVM Sets: Not Supported 00:28:05.450 Read Recovery Levels: Not Supported 00:28:05.450 Endurance Groups: Not Supported 00:28:05.450 Predictable Latency Mode: Not Supported 00:28:05.450 Traffic Based Keep ALive: Not Supported 00:28:05.450 Namespace Granularity: Not Supported 00:28:05.450 SQ Associations: Not Supported 00:28:05.450 UUID List: Not Supported 00:28:05.450 Multi-Domain Subsystem: Not Supported 00:28:05.450 Fixed Capacity Management: Not Supported 00:28:05.450 Variable Capacity Management: Not Supported 00:28:05.450 Delete Endurance Group: Not Supported 00:28:05.450 Delete NVM Set: Not Supported 00:28:05.450 Extended LBA Formats Supported: Not Supported 00:28:05.450 Flexible Data Placement Supported: Not Supported 00:28:05.450 00:28:05.450 Controller Memory Buffer Support 00:28:05.450 ================================ 00:28:05.450 Supported: No 00:28:05.450 00:28:05.450 Persistent Memory Region Support 00:28:05.450 ================================ 00:28:05.450 Supported: No 00:28:05.450 00:28:05.450 Admin Command Set Attributes 00:28:05.450 ============================ 00:28:05.450 Security Send/Receive: Not Supported 00:28:05.450 Format NVM: Not Supported 00:28:05.450 Firmware Activate/Download: Not Supported 00:28:05.450 Namespace Management: Not Supported 00:28:05.450 Device Self-Test: Not Supported 00:28:05.450 Directives: Not Supported 00:28:05.450 NVMe-MI: Not Supported 00:28:05.450 Virtualization Management: Not Supported 00:28:05.450 Doorbell Buffer Config: Not Supported 00:28:05.450 Get LBA Status Capability: Not Supported 00:28:05.450 Command & Feature Lockdown Capability: Not Supported 00:28:05.450 Abort Command Limit: 1 00:28:05.450 Async 
Event Request Limit: 4 00:28:05.450 Number of Firmware Slots: N/A 00:28:05.450 Firmware Slot 1 Read-Only: N/A 00:28:05.450 Firmware Activation Without Reset: N/A 00:28:05.450 Multiple Update Detection Support: N/A 00:28:05.450 Firmware Update Granularity: No Information Provided 00:28:05.450 Per-Namespace SMART Log: No 00:28:05.450 Asymmetric Namespace Access Log Page: Not Supported 00:28:05.450 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:05.450 Command Effects Log Page: Not Supported 00:28:05.450 Get Log Page Extended Data: Supported 00:28:05.450 Telemetry Log Pages: Not Supported 00:28:05.450 Persistent Event Log Pages: Not Supported 00:28:05.450 Supported Log Pages Log Page: May Support 00:28:05.450 Commands Supported & Effects Log Page: Not Supported 00:28:05.450 Feature Identifiers & Effects Log Page:May Support 00:28:05.450 NVMe-MI Commands & Effects Log Page: May Support 00:28:05.450 Data Area 4 for Telemetry Log: Not Supported 00:28:05.450 Error Log Page Entries Supported: 128 00:28:05.450 Keep Alive: Not Supported 00:28:05.450 00:28:05.450 NVM Command Set Attributes 00:28:05.450 ========================== 00:28:05.450 Submission Queue Entry Size 00:28:05.450 Max: 1 00:28:05.450 Min: 1 00:28:05.450 Completion Queue Entry Size 00:28:05.450 Max: 1 00:28:05.450 Min: 1 00:28:05.450 Number of Namespaces: 0 00:28:05.450 Compare Command: Not Supported 00:28:05.450 Write Uncorrectable Command: Not Supported 00:28:05.450 Dataset Management Command: Not Supported 00:28:05.450 Write Zeroes Command: Not Supported 00:28:05.450 Set Features Save Field: Not Supported 00:28:05.450 Reservations: Not Supported 00:28:05.450 Timestamp: Not Supported 00:28:05.450 Copy: Not Supported 00:28:05.450 Volatile Write Cache: Not Present 00:28:05.450 Atomic Write Unit (Normal): 1 00:28:05.450 Atomic Write Unit (PFail): 1 00:28:05.450 Atomic Compare & Write Unit: 1 00:28:05.450 Fused Compare & Write: Supported 00:28:05.450 Scatter-Gather List 00:28:05.450 SGL Command Set: Supported 00:28:05.450 SGL Keyed: Supported 00:28:05.450 SGL Bit Bucket Descriptor: Not Supported 00:28:05.450 SGL Metadata Pointer: Not Supported 00:28:05.450 Oversized SGL: Not Supported 00:28:05.450 SGL Metadata Address: Not Supported 00:28:05.450 SGL Offset: Supported 00:28:05.450 Transport SGL Data Block: Not Supported 00:28:05.450 Replay Protected Memory Block: Not Supported 00:28:05.450 00:28:05.450 Firmware Slot Information 00:28:05.450 ========================= 00:28:05.450 Active slot: 0 00:28:05.450 00:28:05.450 00:28:05.450 Error Log 00:28:05.450 ========= 00:28:05.450 00:28:05.450 Active Namespaces 00:28:05.450 ================= 00:28:05.450 Discovery Log Page 00:28:05.450 ================== 00:28:05.450 Generation Counter: 2 00:28:05.450 Number of Records: 2 00:28:05.450 Record Format: 0 00:28:05.450 00:28:05.450 Discovery Log Entry 0 00:28:05.450 ---------------------- 00:28:05.450 Transport Type: 3 (TCP) 00:28:05.450 Address Family: 1 (IPv4) 00:28:05.450 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:05.450 Entry Flags: 00:28:05.450 Duplicate Returned Information: 1 00:28:05.450 Explicit Persistent Connection Support for Discovery: 1 00:28:05.450 Transport Requirements: 00:28:05.450 Secure Channel: Not Required 00:28:05.450 Port ID: 0 (0x0000) 00:28:05.450 Controller ID: 65535 (0xffff) 00:28:05.450 Admin Max SQ Size: 128 00:28:05.450 Transport Service Identifier: 4420 00:28:05.450 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:05.450 Transport Address: 10.0.0.2 00:28:05.450 
Discovery Log Entry 1 00:28:05.450 ---------------------- 00:28:05.450 Transport Type: 3 (TCP) 00:28:05.450 Address Family: 1 (IPv4) 00:28:05.450 Subsystem Type: 2 (NVM Subsystem) 00:28:05.450 Entry Flags: 00:28:05.450 Duplicate Returned Information: 0 00:28:05.450 Explicit Persistent Connection Support for Discovery: 0 00:28:05.450 Transport Requirements: 00:28:05.450 Secure Channel: Not Required 00:28:05.450 Port ID: 0 (0x0000) 00:28:05.450 Controller ID: 65535 (0xffff) 00:28:05.450 Admin Max SQ Size: 128 00:28:05.450 Transport Service Identifier: 4420 00:28:05.450 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:05.450 Transport Address: 10.0.0.2 [2024-07-15 19:34:16.272472] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:05.450 [2024-07-15 19:34:16.272482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490980) on tqpair=0x1424150 00:28:05.450 [2024-07-15 19:34:16.272488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.450 [2024-07-15 19:34:16.272493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490b00) on tqpair=0x1424150 00:28:05.450 [2024-07-15 19:34:16.272497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.450 [2024-07-15 19:34:16.272501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490c80) on tqpair=0x1424150 00:28:05.450 [2024-07-15 19:34:16.272505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.450 [2024-07-15 19:34:16.272512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.450 [2024-07-15 19:34:16.272516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.450 [2024-07-15 19:34:16.272525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.450 [2024-07-15 19:34:16.272529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.450 [2024-07-15 19:34:16.272532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.450 [2024-07-15 19:34:16.272539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.450 [2024-07-15 19:34:16.272552] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.450 [2024-07-15 19:34:16.272630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.450 [2024-07-15 19:34:16.272636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.450 [2024-07-15 19:34:16.272639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.450 [2024-07-15 19:34:16.272643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.450 [2024-07-15 19:34:16.272649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.450 [2024-07-15 19:34:16.272652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.450 [2024-07-15 19:34:16.272655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.450 [2024-07-15 
19:34:16.272661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.450 [2024-07-15 19:34:16.272674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.450 [2024-07-15 19:34:16.272759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.450 [2024-07-15 19:34:16.272765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.450 [2024-07-15 19:34:16.272768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.450 [2024-07-15 19:34:16.272771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.450 [2024-07-15 19:34:16.272776] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:05.450 [2024-07-15 19:34:16.272780] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:05.451 [2024-07-15 19:34:16.272788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.272791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.272794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.272800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.272809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 [2024-07-15 19:34:16.272884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.272890] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.272893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.272897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.451 [2024-07-15 19:34:16.272905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.272909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.272912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.272918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.272929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 [2024-07-15 19:34:16.273010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.273016] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.273019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.451 [2024-07-15 19:34:16.273031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273037] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.273043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.273053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 [2024-07-15 19:34:16.273128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.273134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.273137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273140] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.451 [2024-07-15 19:34:16.273148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.273161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.273170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 [2024-07-15 19:34:16.273295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.273301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.273304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.451 [2024-07-15 19:34:16.273317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.273330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.273339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 [2024-07-15 19:34:16.273413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.273419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.273422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.451 [2024-07-15 19:34:16.273433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.273446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.273455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 [2024-07-15 19:34:16.273529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.273535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.273538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.451 [2024-07-15 19:34:16.273550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.273563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.273572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 [2024-07-15 19:34:16.273661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.273667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.273670] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.451 [2024-07-15 19:34:16.273682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.273694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.273703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 [2024-07-15 19:34:16.273778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.273783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.273786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.451 [2024-07-15 19:34:16.273798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.273810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.273819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 
[2024-07-15 19:34:16.273934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.273940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.273943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.451 [2024-07-15 19:34:16.273955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.273962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.273968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.273977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 [2024-07-15 19:34:16.274052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.274061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.274064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.274068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.451 [2024-07-15 19:34:16.274076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.274080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.451 [2024-07-15 19:34:16.274083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.451 [2024-07-15 19:34:16.274089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.451 [2024-07-15 19:34:16.274098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.451 [2024-07-15 19:34:16.274181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.451 [2024-07-15 19:34:16.274186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.451 [2024-07-15 19:34:16.274189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.452 [2024-07-15 19:34:16.274193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.452 [2024-07-15 19:34:16.274201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.452 [2024-07-15 19:34:16.274204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.452 [2024-07-15 19:34:16.274208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.452 [2024-07-15 19:34:16.274214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.452 [2024-07-15 19:34:16.274223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.452 [2024-07-15 19:34:16.278237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.452 [2024-07-15 19:34:16.278243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:28:05.452 [2024-07-15 19:34:16.278246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.452 [2024-07-15 19:34:16.278249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.452 [2024-07-15 19:34:16.278259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.452 [2024-07-15 19:34:16.278263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.452 [2024-07-15 19:34:16.278266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1424150) 00:28:05.452 [2024-07-15 19:34:16.278272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.452 [2024-07-15 19:34:16.278283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1490e00, cid 3, qid 0 00:28:05.452 [2024-07-15 19:34:16.278496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.452 [2024-07-15 19:34:16.278501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.452 [2024-07-15 19:34:16.278504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.452 [2024-07-15 19:34:16.278507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1490e00) on tqpair=0x1424150 00:28:05.452 [2024-07-15 19:34:16.278515] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:05.452 00:28:05.452 19:34:16 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:05.714 [2024-07-15 19:34:16.314586] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:28:05.714 [2024-07-15 19:34:16.314640] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752900 ] 00:28:05.714 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.714 [2024-07-15 19:34:16.328488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:05.714 [2024-07-15 19:34:16.344470] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:05.714 [2024-07-15 19:34:16.344510] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:05.714 [2024-07-15 19:34:16.344515] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:05.714 [2024-07-15 19:34:16.344526] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:05.714 [2024-07-15 19:34:16.344531] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:05.714 [2024-07-15 19:34:16.344849] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:05.714 [2024-07-15 19:34:16.344875] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x89c150 0 00:28:05.714 [2024-07-15 19:34:16.358232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:05.714 [2024-07-15 19:34:16.358244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:05.714 [2024-07-15 19:34:16.358248] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:05.714 [2024-07-15 19:34:16.358251] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:05.714 [2024-07-15 19:34:16.358280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.714 [2024-07-15 19:34:16.358285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.714 [2024-07-15 19:34:16.358288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89c150) 00:28:05.714 [2024-07-15 19:34:16.358299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:05.714 [2024-07-15 19:34:16.358314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908980, cid 0, qid 0 00:28:05.714 [2024-07-15 19:34:16.366236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.714 [2024-07-15 19:34:16.366245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.714 [2024-07-15 19:34:16.366248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.714 [2024-07-15 19:34:16.366252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908980) on tqpair=0x89c150 00:28:05.714 [2024-07-15 19:34:16.366263] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:05.714 [2024-07-15 19:34:16.366268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:05.714 [2024-07-15 19:34:16.366273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:05.714 [2024-07-15 19:34:16.366284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.714 [2024-07-15 19:34:16.366288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.714 [2024-07-15 19:34:16.366291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89c150) 00:28:05.714 [2024-07-15 19:34:16.366298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.714 [2024-07-15 19:34:16.366311] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908980, cid 0, qid 0 00:28:05.714 [2024-07-15 19:34:16.366476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.715 [2024-07-15 19:34:16.366482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.715 [2024-07-15 19:34:16.366485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908980) on tqpair=0x89c150 00:28:05.715 [2024-07-15 19:34:16.366494] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:05.715 [2024-07-15 19:34:16.366501] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:05.715 [2024-07-15 19:34:16.366507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89c150) 00:28:05.715 [2024-07-15 19:34:16.366519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.715 [2024-07-15 19:34:16.366530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908980, cid 0, qid 0 00:28:05.715 [2024-07-15 19:34:16.366606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.715 [2024-07-15 19:34:16.366612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.715 [2024-07-15 19:34:16.366615] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908980) on tqpair=0x89c150 00:28:05.715 [2024-07-15 19:34:16.366622] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:05.715 [2024-07-15 19:34:16.366629] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:05.715 [2024-07-15 19:34:16.366635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89c150) 00:28:05.715 [2024-07-15 19:34:16.366647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.715 [2024-07-15 19:34:16.366656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908980, cid 0, qid 0 00:28:05.715 [2024-07-15 19:34:16.366732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.715 [2024-07-15 19:34:16.366737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.715 [2024-07-15 19:34:16.366740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908980) on tqpair=0x89c150 00:28:05.715 [2024-07-15 19:34:16.366747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:05.715 [2024-07-15 19:34:16.366755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89c150) 00:28:05.715 [2024-07-15 19:34:16.366768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.715 [2024-07-15 19:34:16.366776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908980, cid 0, qid 0 00:28:05.715 [2024-07-15 19:34:16.366849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.715 [2024-07-15 19:34:16.366854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.715 [2024-07-15 19:34:16.366857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908980) on tqpair=0x89c150 00:28:05.715 [2024-07-15 19:34:16.366864] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:05.715 [2024-07-15 19:34:16.366868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:05.715 [2024-07-15 19:34:16.366876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:05.715 [2024-07-15 19:34:16.366981] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:05.715 [2024-07-15 19:34:16.366985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:05.715 [2024-07-15 19:34:16.366991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.366997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89c150) 00:28:05.715 [2024-07-15 19:34:16.367003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.715 [2024-07-15 19:34:16.367012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908980, cid 0, qid 0 00:28:05.715 [2024-07-15 19:34:16.367086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.715 [2024-07-15 19:34:16.367092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.715 [2024-07-15 19:34:16.367095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908980) on tqpair=0x89c150 00:28:05.715 [2024-07-15 19:34:16.367102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:05.715 [2024-07-15 19:34:16.367110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89c150) 00:28:05.715 [2024-07-15 19:34:16.367122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.715 [2024-07-15 19:34:16.367131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908980, cid 0, qid 0 00:28:05.715 [2024-07-15 19:34:16.367201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.715 [2024-07-15 19:34:16.367207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.715 [2024-07-15 19:34:16.367210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908980) on tqpair=0x89c150 00:28:05.715 [2024-07-15 19:34:16.367217] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:05.715 [2024-07-15 19:34:16.367221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:05.715 [2024-07-15 19:34:16.367234] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:05.715 [2024-07-15 19:34:16.367244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:05.715 [2024-07-15 19:34:16.367251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89c150) 00:28:05.715 [2024-07-15 19:34:16.367260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.715 [2024-07-15 19:34:16.367270] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908980, cid 0, qid 0 00:28:05.715 [2024-07-15 19:34:16.367385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.715 [2024-07-15 19:34:16.367391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.715 [2024-07-15 19:34:16.367395] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367399] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89c150): datao=0, datal=4096, cccid=0 00:28:05.715 [2024-07-15 19:34:16.367403] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x908980) on tqpair(0x89c150): expected_datao=0, payload_size=4096 00:28:05.715 [2024-07-15 19:34:16.367406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367412] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367416] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.715 [2024-07-15 19:34:16.367443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.715 [2024-07-15 19:34:16.367446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367449] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908980) on tqpair=0x89c150 00:28:05.715 [2024-07-15 19:34:16.367456] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:05.715 [2024-07-15 19:34:16.367462] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:05.715 [2024-07-15 19:34:16.367466] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:05.715 [2024-07-15 19:34:16.367469] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:05.715 [2024-07-15 19:34:16.367473] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:05.715 [2024-07-15 19:34:16.367477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:05.715 [2024-07-15 19:34:16.367485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:05.715 [2024-07-15 19:34:16.367491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89c150) 00:28:05.715 [2024-07-15 19:34:16.367503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:05.715 [2024-07-15 19:34:16.367513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908980, cid 0, qid 0 00:28:05.715 [2024-07-15 19:34:16.367627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.715 [2024-07-15 19:34:16.367633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.715 [2024-07-15 19:34:16.367636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908980) on tqpair=0x89c150 00:28:05.715 [2024-07-15 19:34:16.367644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89c150) 00:28:05.715 [2024-07-15 19:34:16.367656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.715 [2024-07-15 19:34:16.367661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.715 [2024-07-15 19:34:16.367668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x89c150) 00:28:05.716 [2024-07-15 19:34:16.367672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.716 [2024-07-15 19:34:16.367679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.367682] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.367685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x89c150) 00:28:05.716 [2024-07-15 19:34:16.367690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.716 [2024-07-15 19:34:16.367695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.367699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.367701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.716 [2024-07-15 19:34:16.367706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.716 [2024-07-15 19:34:16.367710] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.367720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.367725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.367728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89c150) 00:28:05.716 [2024-07-15 19:34:16.367734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.716 [2024-07-15 19:34:16.367745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908980, cid 0, qid 0 00:28:05.716 [2024-07-15 19:34:16.367749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908b00, cid 1, qid 0 00:28:05.716 [2024-07-15 19:34:16.367753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908c80, cid 2, qid 0 00:28:05.716 [2024-07-15 19:34:16.367758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.716 [2024-07-15 19:34:16.367762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908f80, cid 4, qid 0 00:28:05.716 [2024-07-15 19:34:16.367867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.716 [2024-07-15 19:34:16.367873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.716 [2024-07-15 19:34:16.367876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.367879] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908f80) on tqpair=0x89c150 00:28:05.716 [2024-07-15 19:34:16.367883] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:05.716 [2024-07-15 19:34:16.367887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.367894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.367899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 
00:28:05.716 [2024-07-15 19:34:16.367905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.367908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.367911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89c150) 00:28:05.716 [2024-07-15 19:34:16.367917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:05.716 [2024-07-15 19:34:16.367926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908f80, cid 4, qid 0 00:28:05.716 [2024-07-15 19:34:16.368000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.716 [2024-07-15 19:34:16.368005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.716 [2024-07-15 19:34:16.368010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908f80) on tqpair=0x89c150 00:28:05.716 [2024-07-15 19:34:16.368063] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89c150) 00:28:05.716 [2024-07-15 19:34:16.368087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.716 [2024-07-15 19:34:16.368096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908f80, cid 4, qid 0 00:28:05.716 [2024-07-15 19:34:16.368183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.716 [2024-07-15 19:34:16.368189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.716 [2024-07-15 19:34:16.368192] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368195] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89c150): datao=0, datal=4096, cccid=4 00:28:05.716 [2024-07-15 19:34:16.368199] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x908f80) on tqpair(0x89c150): expected_datao=0, payload_size=4096 00:28:05.716 [2024-07-15 19:34:16.368203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368209] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368212] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.716 [2024-07-15 19:34:16.368246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.716 [2024-07-15 19:34:16.368249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908f80) on tqpair=0x89c150 00:28:05.716 [2024-07-15 19:34:16.368260] 
nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:05.716 [2024-07-15 19:34:16.368268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89c150) 00:28:05.716 [2024-07-15 19:34:16.368291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.716 [2024-07-15 19:34:16.368302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908f80, cid 4, qid 0 00:28:05.716 [2024-07-15 19:34:16.368437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.716 [2024-07-15 19:34:16.368442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.716 [2024-07-15 19:34:16.368445] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368448] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89c150): datao=0, datal=4096, cccid=4 00:28:05.716 [2024-07-15 19:34:16.368452] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x908f80) on tqpair(0x89c150): expected_datao=0, payload_size=4096 00:28:05.716 [2024-07-15 19:34:16.368456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368461] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368468] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.716 [2024-07-15 19:34:16.368521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.716 [2024-07-15 19:34:16.368524] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908f80) on tqpair=0x89c150 00:28:05.716 [2024-07-15 19:34:16.368537] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89c150) 00:28:05.716 [2024-07-15 19:34:16.368560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.716 [2024-07-15 19:34:16.368571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908f80, cid 4, qid 0 00:28:05.716 [2024-07-15 19:34:16.368660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.716 [2024-07-15 
19:34:16.368665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.716 [2024-07-15 19:34:16.368668] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368671] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89c150): datao=0, datal=4096, cccid=4 00:28:05.716 [2024-07-15 19:34:16.368675] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x908f80) on tqpair(0x89c150): expected_datao=0, payload_size=4096 00:28:05.716 [2024-07-15 19:34:16.368679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368684] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368687] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.716 [2024-07-15 19:34:16.368724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.716 [2024-07-15 19:34:16.368727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908f80) on tqpair=0x89c150 00:28:05.716 [2024-07-15 19:34:16.368736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368749] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368755] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368764] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368768] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:05.716 [2024-07-15 19:34:16.368772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:05.716 [2024-07-15 19:34:16.368776] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:05.716 [2024-07-15 19:34:16.368790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.716 [2024-07-15 19:34:16.368794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89c150) 00:28:05.716 [2024-07-15 19:34:16.368799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.717 [2024-07-15 19:34:16.368805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.368808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.717 [2024-07-15 
19:34:16.368811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x89c150) 00:28:05.717 [2024-07-15 19:34:16.368816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.717 [2024-07-15 19:34:16.368828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908f80, cid 4, qid 0 00:28:05.717 [2024-07-15 19:34:16.368833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x909100, cid 5, qid 0 00:28:05.717 [2024-07-15 19:34:16.372233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.717 [2024-07-15 19:34:16.372240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.717 [2024-07-15 19:34:16.372243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908f80) on tqpair=0x89c150 00:28:05.717 [2024-07-15 19:34:16.372253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.717 [2024-07-15 19:34:16.372257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.717 [2024-07-15 19:34:16.372260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x909100) on tqpair=0x89c150 00:28:05.717 [2024-07-15 19:34:16.372272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x89c150) 00:28:05.717 [2024-07-15 19:34:16.372281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.717 [2024-07-15 19:34:16.372293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x909100, cid 5, qid 0 00:28:05.717 [2024-07-15 19:34:16.372457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.717 [2024-07-15 19:34:16.372462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.717 [2024-07-15 19:34:16.372465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x909100) on tqpair=0x89c150 00:28:05.717 [2024-07-15 19:34:16.372476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x89c150) 00:28:05.717 [2024-07-15 19:34:16.372485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.717 [2024-07-15 19:34:16.372494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x909100, cid 5, qid 0 00:28:05.717 [2024-07-15 19:34:16.372568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.717 [2024-07-15 19:34:16.372573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.717 [2024-07-15 19:34:16.372576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x909100) on tqpair=0x89c150 00:28:05.717 [2024-07-15 
19:34:16.372587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x89c150) 00:28:05.717 [2024-07-15 19:34:16.372596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.717 [2024-07-15 19:34:16.372607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x909100, cid 5, qid 0 00:28:05.717 [2024-07-15 19:34:16.372684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.717 [2024-07-15 19:34:16.372690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.717 [2024-07-15 19:34:16.372692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x909100) on tqpair=0x89c150 00:28:05.717 [2024-07-15 19:34:16.372708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x89c150) 00:28:05.717 [2024-07-15 19:34:16.372718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.717 [2024-07-15 19:34:16.372724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89c150) 00:28:05.717 [2024-07-15 19:34:16.372732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.717 [2024-07-15 19:34:16.372738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x89c150) 00:28:05.717 [2024-07-15 19:34:16.372747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.717 [2024-07-15 19:34:16.372753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x89c150) 00:28:05.717 [2024-07-15 19:34:16.372761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.717 [2024-07-15 19:34:16.372772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x909100, cid 5, qid 0 00:28:05.717 [2024-07-15 19:34:16.372776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908f80, cid 4, qid 0 00:28:05.717 [2024-07-15 19:34:16.372780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x909280, cid 6, qid 0 00:28:05.717 [2024-07-15 19:34:16.372784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x909400, cid 7, qid 0 00:28:05.717 [2024-07-15 19:34:16.372981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.717 [2024-07-15 19:34:16.372986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:28:05.717 [2024-07-15 19:34:16.372989] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.372992] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89c150): datao=0, datal=8192, cccid=5 00:28:05.717 [2024-07-15 19:34:16.372997] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x909100) on tqpair(0x89c150): expected_datao=0, payload_size=8192 00:28:05.717 [2024-07-15 19:34:16.373000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373081] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373085] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.717 [2024-07-15 19:34:16.373094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.717 [2024-07-15 19:34:16.373097] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373101] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89c150): datao=0, datal=512, cccid=4 00:28:05.717 [2024-07-15 19:34:16.373106] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x908f80) on tqpair(0x89c150): expected_datao=0, payload_size=512 00:28:05.717 [2024-07-15 19:34:16.373110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373115] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373118] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.717 [2024-07-15 19:34:16.373128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.717 [2024-07-15 19:34:16.373131] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373134] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89c150): datao=0, datal=512, cccid=6 00:28:05.717 [2024-07-15 19:34:16.373137] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x909280) on tqpair(0x89c150): expected_datao=0, payload_size=512 00:28:05.717 [2024-07-15 19:34:16.373141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373146] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373150] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:05.717 [2024-07-15 19:34:16.373159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:05.717 [2024-07-15 19:34:16.373162] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373165] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89c150): datao=0, datal=4096, cccid=7 00:28:05.717 [2024-07-15 19:34:16.373169] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x909400) on tqpair(0x89c150): expected_datao=0, payload_size=4096 00:28:05.717 [2024-07-15 19:34:16.373172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373178] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:28:05.717 [2024-07-15 19:34:16.373181] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.717 [2024-07-15 19:34:16.373193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.717 [2024-07-15 19:34:16.373196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x909100) on tqpair=0x89c150 00:28:05.717 [2024-07-15 19:34:16.373210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.717 [2024-07-15 19:34:16.373215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.717 [2024-07-15 19:34:16.373218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908f80) on tqpair=0x89c150 00:28:05.717 [2024-07-15 19:34:16.373236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.717 [2024-07-15 19:34:16.373241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.717 [2024-07-15 19:34:16.373245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x909280) on tqpair=0x89c150 00:28:05.717 [2024-07-15 19:34:16.373253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.717 [2024-07-15 19:34:16.373258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.717 [2024-07-15 19:34:16.373261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.717 [2024-07-15 19:34:16.373264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x909400) on tqpair=0x89c150 00:28:05.717 ===================================================== 00:28:05.717 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.717 ===================================================== 00:28:05.717 Controller Capabilities/Features 00:28:05.717 ================================ 00:28:05.717 Vendor ID: 8086 00:28:05.717 Subsystem Vendor ID: 8086 00:28:05.717 Serial Number: SPDK00000000000001 00:28:05.717 Model Number: SPDK bdev Controller 00:28:05.717 Firmware Version: 24.09 00:28:05.717 Recommended Arb Burst: 6 00:28:05.717 IEEE OUI Identifier: e4 d2 5c 00:28:05.718 Multi-path I/O 00:28:05.718 May have multiple subsystem ports: Yes 00:28:05.718 May have multiple controllers: Yes 00:28:05.718 Associated with SR-IOV VF: No 00:28:05.718 Max Data Transfer Size: 131072 00:28:05.718 Max Number of Namespaces: 32 00:28:05.718 Max Number of I/O Queues: 127 00:28:05.718 NVMe Specification Version (VS): 1.3 00:28:05.718 NVMe Specification Version (Identify): 1.3 00:28:05.718 Maximum Queue Entries: 128 00:28:05.718 Contiguous Queues Required: Yes 00:28:05.718 Arbitration Mechanisms Supported 00:28:05.718 Weighted Round Robin: Not Supported 00:28:05.718 Vendor Specific: Not Supported 00:28:05.718 Reset Timeout: 15000 ms 00:28:05.718 Doorbell Stride: 4 bytes 00:28:05.718 NVM Subsystem Reset: Not Supported 00:28:05.718 Command Sets Supported 00:28:05.718 NVM Command Set: Supported 00:28:05.718 Boot Partition: Not Supported 00:28:05.718 Memory Page Size Minimum: 4096 bytes 00:28:05.718 Memory Page Size Maximum: 4096 bytes 00:28:05.718 Persistent Memory Region: Not 
Supported
00:28:05.718 Optional Asynchronous Events Supported
00:28:05.718 Namespace Attribute Notices: Supported
00:28:05.718 Firmware Activation Notices: Not Supported
00:28:05.718 ANA Change Notices: Not Supported
00:28:05.718 PLE Aggregate Log Change Notices: Not Supported
00:28:05.718 LBA Status Info Alert Notices: Not Supported
00:28:05.718 EGE Aggregate Log Change Notices: Not Supported
00:28:05.718 Normal NVM Subsystem Shutdown event: Not Supported
00:28:05.718 Zone Descriptor Change Notices: Not Supported
00:28:05.718 Discovery Log Change Notices: Not Supported
00:28:05.718 Controller Attributes
00:28:05.718 128-bit Host Identifier: Supported
00:28:05.718 Non-Operational Permissive Mode: Not Supported
00:28:05.718 NVM Sets: Not Supported
00:28:05.718 Read Recovery Levels: Not Supported
00:28:05.718 Endurance Groups: Not Supported
00:28:05.718 Predictable Latency Mode: Not Supported
00:28:05.718 Traffic Based Keep ALive: Not Supported
00:28:05.718 Namespace Granularity: Not Supported
00:28:05.718 SQ Associations: Not Supported
00:28:05.718 UUID List: Not Supported
00:28:05.718 Multi-Domain Subsystem: Not Supported
00:28:05.718 Fixed Capacity Management: Not Supported
00:28:05.718 Variable Capacity Management: Not Supported
00:28:05.718 Delete Endurance Group: Not Supported
00:28:05.718 Delete NVM Set: Not Supported
00:28:05.718 Extended LBA Formats Supported: Not Supported
00:28:05.718 Flexible Data Placement Supported: Not Supported
00:28:05.718 
00:28:05.718 Controller Memory Buffer Support
00:28:05.718 ================================
00:28:05.718 Supported: No
00:28:05.718 
00:28:05.718 Persistent Memory Region Support
00:28:05.718 ================================
00:28:05.718 Supported: No
00:28:05.718 
00:28:05.718 Admin Command Set Attributes
00:28:05.718 ============================
00:28:05.718 Security Send/Receive: Not Supported
00:28:05.718 Format NVM: Not Supported
00:28:05.718 Firmware Activate/Download: Not Supported
00:28:05.718 Namespace Management: Not Supported
00:28:05.718 Device Self-Test: Not Supported
00:28:05.718 Directives: Not Supported
00:28:05.718 NVMe-MI: Not Supported
00:28:05.718 Virtualization Management: Not Supported
00:28:05.718 Doorbell Buffer Config: Not Supported
00:28:05.718 Get LBA Status Capability: Not Supported
00:28:05.718 Command & Feature Lockdown Capability: Not Supported
00:28:05.718 Abort Command Limit: 4
00:28:05.718 Async Event Request Limit: 4
00:28:05.718 Number of Firmware Slots: N/A
00:28:05.718 Firmware Slot 1 Read-Only: N/A
00:28:05.718 Firmware Activation Without Reset: N/A
00:28:05.718 Multiple Update Detection Support: N/A
00:28:05.718 Firmware Update Granularity: No Information Provided
00:28:05.718 Per-Namespace SMART Log: No
00:28:05.718 Asymmetric Namespace Access Log Page: Not Supported
00:28:05.718 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:28:05.718 Command Effects Log Page: Supported
00:28:05.718 Get Log Page Extended Data: Supported
00:28:05.718 Telemetry Log Pages: Not Supported
00:28:05.718 Persistent Event Log Pages: Not Supported
00:28:05.718 Supported Log Pages Log Page: May Support
00:28:05.718 Commands Supported & Effects Log Page: Not Supported
00:28:05.718 Feature Identifiers & Effects Log Page:May Support
00:28:05.718 NVMe-MI Commands & Effects Log Page: May Support
00:28:05.718 Data Area 4 for Telemetry Log: Not Supported
00:28:05.718 Error Log Page Entries Supported: 128
00:28:05.718 Keep Alive: Supported
00:28:05.718 Keep Alive Granularity: 10000 ms
00:28:05.718 
00:28:05.718 NVM Command Set Attributes
00:28:05.718 ==========================
00:28:05.718 Submission Queue Entry Size
00:28:05.718 Max: 64
00:28:05.718 Min: 64
00:28:05.718 Completion Queue Entry Size
00:28:05.718 Max: 16
00:28:05.718 Min: 16
00:28:05.718 Number of Namespaces: 32
00:28:05.718 Compare Command: Supported
00:28:05.718 Write Uncorrectable Command: Not Supported
00:28:05.718 Dataset Management Command: Supported
00:28:05.718 Write Zeroes Command: Supported
00:28:05.718 Set Features Save Field: Not Supported
00:28:05.718 Reservations: Supported
00:28:05.718 Timestamp: Not Supported
00:28:05.718 Copy: Supported
00:28:05.718 Volatile Write Cache: Present
00:28:05.718 Atomic Write Unit (Normal): 1
00:28:05.718 Atomic Write Unit (PFail): 1
00:28:05.718 Atomic Compare & Write Unit: 1
00:28:05.718 Fused Compare & Write: Supported
00:28:05.718 Scatter-Gather List
00:28:05.718 SGL Command Set: Supported
00:28:05.718 SGL Keyed: Supported
00:28:05.718 SGL Bit Bucket Descriptor: Not Supported
00:28:05.718 SGL Metadata Pointer: Not Supported
00:28:05.718 Oversized SGL: Not Supported
00:28:05.718 SGL Metadata Address: Not Supported
00:28:05.718 SGL Offset: Supported
00:28:05.718 Transport SGL Data Block: Not Supported
00:28:05.718 Replay Protected Memory Block: Not Supported
00:28:05.718 
00:28:05.718 Firmware Slot Information
00:28:05.718 =========================
00:28:05.718 Active slot: 1
00:28:05.718 Slot 1 Firmware Revision: 24.09
00:28:05.718 
00:28:05.718 
00:28:05.718 Commands Supported and Effects
00:28:05.718 ==============================
00:28:05.718 Admin Commands
00:28:05.718 --------------
00:28:05.718 Get Log Page (02h): Supported
00:28:05.718 Identify (06h): Supported
00:28:05.718 Abort (08h): Supported
00:28:05.718 Set Features (09h): Supported
00:28:05.718 Get Features (0Ah): Supported
00:28:05.718 Asynchronous Event Request (0Ch): Supported
00:28:05.718 Keep Alive (18h): Supported
00:28:05.718 I/O Commands
00:28:05.718 ------------
00:28:05.718 Flush (00h): Supported LBA-Change
00:28:05.718 Write (01h): Supported LBA-Change
00:28:05.718 Read (02h): Supported
00:28:05.718 Compare (05h): Supported
00:28:05.718 Write Zeroes (08h): Supported LBA-Change
00:28:05.718 Dataset Management (09h): Supported LBA-Change
00:28:05.718 Copy (19h): Supported LBA-Change
00:28:05.718 
00:28:05.718 Error Log
00:28:05.718 =========
00:28:05.718 
00:28:05.718 Arbitration
00:28:05.718 ===========
00:28:05.718 Arbitration Burst: 1
00:28:05.718 
00:28:05.718 Power Management
00:28:05.718 ================
00:28:05.718 Number of Power States: 1
00:28:05.718 Current Power State: Power State #0
00:28:05.718 Power State #0:
00:28:05.718 Max Power: 0.00 W
00:28:05.718 Non-Operational State: Operational
00:28:05.718 Entry Latency: Not Reported
00:28:05.718 Exit Latency: Not Reported
00:28:05.718 Relative Read Throughput: 0
00:28:05.718 Relative Read Latency: 0
00:28:05.718 Relative Write Throughput: 0
00:28:05.718 Relative Write Latency: 0
00:28:05.718 Idle Power: Not Reported
00:28:05.718 Active Power: Not Reported
00:28:05.718 Non-Operational Permissive Mode: Not Supported
00:28:05.718 
00:28:05.718 Health Information
00:28:05.718 ==================
00:28:05.718 Critical Warnings:
00:28:05.718 Available Spare Space: OK
00:28:05.718 Temperature: OK
00:28:05.718 Device Reliability: OK
00:28:05.718 Read Only: No
00:28:05.718 Volatile Memory Backup: OK
00:28:05.718 Current Temperature: 0 Kelvin (-273 Celsius)
00:28:05.718 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:28:05.718 Available Spare: 0%
00:28:05.718 Available Spare Threshold: 0%
00:28:05.718 Life Percentage Used:[2024-07-15 19:34:16.373347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.718 [2024-07-15 19:34:16.373352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x89c150) 00:28:05.718 [2024-07-15 19:34:16.373357] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.718 [2024-07-15 19:34:16.373371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x909400, cid 7, qid 0 00:28:05.718 [2024-07-15 19:34:16.373490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.718 [2024-07-15 19:34:16.373496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.718 [2024-07-15 19:34:16.373499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.718 [2024-07-15 19:34:16.373502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x909400) on tqpair=0x89c150 00:28:05.718 [2024-07-15 19:34:16.373532] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:05.719 [2024-07-15 19:34:16.373540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908980) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.373545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.719 [2024-07-15 19:34:16.373550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908b00) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.373554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.719 [2024-07-15 19:34:16.373558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908c80) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.373562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.719 [2024-07-15 19:34:16.373566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.373570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.719 [2024-07-15 19:34:16.373577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.719 [2024-07-15 19:34:16.373589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.719 [2024-07-15 19:34:16.373601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.719 [2024-07-15 19:34:16.373681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.719 [2024-07-15 19:34:16.373687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.719 [2024-07-15 19:34:16.373690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.719 
[2024-07-15 19:34:16.373699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.719 [2024-07-15 19:34:16.373710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.719 [2024-07-15 19:34:16.373722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.719 [2024-07-15 19:34:16.373806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.719 [2024-07-15 19:34:16.373811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.719 [2024-07-15 19:34:16.373814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.373821] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:05.719 [2024-07-15 19:34:16.373825] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:05.719 [2024-07-15 19:34:16.373834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.719 [2024-07-15 19:34:16.373846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.719 [2024-07-15 19:34:16.373856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.719 [2024-07-15 19:34:16.373929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.719 [2024-07-15 19:34:16.373934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.719 [2024-07-15 19:34:16.373937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.373948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.373955] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.719 [2024-07-15 19:34:16.373960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.719 [2024-07-15 19:34:16.373969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.719 [2024-07-15 19:34:16.374045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.719 [2024-07-15 19:34:16.374050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.719 [2024-07-15 19:34:16.374053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374056] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.374064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.719 [2024-07-15 19:34:16.374076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.719 [2024-07-15 19:34:16.374085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.719 [2024-07-15 19:34:16.374158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.719 [2024-07-15 19:34:16.374164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.719 [2024-07-15 19:34:16.374167] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.374178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374181] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374184] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.719 [2024-07-15 19:34:16.374190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.719 [2024-07-15 19:34:16.374199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.719 [2024-07-15 19:34:16.374276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.719 [2024-07-15 19:34:16.374282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.719 [2024-07-15 19:34:16.374285] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374288] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.374296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.719 [2024-07-15 19:34:16.374310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.719 [2024-07-15 19:34:16.374319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.719 [2024-07-15 19:34:16.374397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.719 [2024-07-15 19:34:16.374402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.719 [2024-07-15 19:34:16.374405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.374416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.719 
[2024-07-15 19:34:16.374420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.719 [2024-07-15 19:34:16.374428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.719 [2024-07-15 19:34:16.374437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.719 [2024-07-15 19:34:16.374512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.719 [2024-07-15 19:34:16.374518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.719 [2024-07-15 19:34:16.374521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.719 [2024-07-15 19:34:16.374532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.719 [2024-07-15 19:34:16.374538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.719 [2024-07-15 19:34:16.374544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.719 [2024-07-15 19:34:16.374553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.719 [2024-07-15 19:34:16.374629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.374634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.374637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.374648] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.374660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.374669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.374744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.374750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.374753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.374764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.374777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.374786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.374859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.374865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.374868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.374878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.374891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.374900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.374975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.374980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.374983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.374994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.374998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.375006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.375015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.375088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.375093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.375096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.375107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.375119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 
19:34:16.375128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.375200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.375206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.375209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.375220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.375237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.375247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.375322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.375329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.375331] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375335] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.375343] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.375355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.375365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.375440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.375446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.375449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.375459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.375471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.375480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.375558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:28:05.720 [2024-07-15 19:34:16.375563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.375566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.375577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.375589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.375598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.375673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.375678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.375681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.375692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.375704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.375715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.375787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.375793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.375796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.375807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.375819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.375828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.375907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.375912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.375915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
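The block of near-identical DEBUG entries above is the initiator's shutdown poll: once the identify test is done, the host keeps sending fabrics PROPERTY GET capsules (the FABRIC PROPERTY GET qid:0 notices) to re-read the controller status register until the target reports shutdown complete, which the log confirms further down with nvme_ctrlr_shutdown_poll_async reporting completion in 7 milliseconds. On a kernel-initiator connection the same property can be read by hand with nvme-cli; a minimal sketch, assuming the fabrics controller shows up as /dev/nvme0 (0x1c is the standard CSTS register offset):

    # Hedged sketch: read the controller status (CSTS) property that the
    # shutdown poll keeps fetching; /dev/nvme0 is an assumed device name.
    nvme get-property /dev/nvme0 --offset=0x1c --human-readable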
00:28:05.720 [2024-07-15 19:34:16.375918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.375926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.375933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.720 [2024-07-15 19:34:16.375938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.720 [2024-07-15 19:34:16.375947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.720 [2024-07-15 19:34:16.376025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.720 [2024-07-15 19:34:16.376030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.720 [2024-07-15 19:34:16.376033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.376036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.720 [2024-07-15 19:34:16.376044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.376048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.720 [2024-07-15 19:34:16.376051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.376056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.376065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.376142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.376147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.376150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.376161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376164] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.376173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.376182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.376258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.376264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.376267] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.376277] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.376290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.376299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.376375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.376380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.376383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.376394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.376406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.376415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.376492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.376498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.376501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.376512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.376524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.376533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.376610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.376615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.376618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.376629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376636] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.376641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.376650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.376727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.376734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.376737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.376748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.376760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.376769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.376846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.376852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.376855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.376866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.376878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.376887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.376962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.376968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.376971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.376982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.376988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.376994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.377002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.377082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.377088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.377091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.377094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.377102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.377106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.377109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.377114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.377123] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.377197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.377202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.377208] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.377211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.377219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.377222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.381233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89c150) 00:28:05.721 [2024-07-15 19:34:16.381240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.721 [2024-07-15 19:34:16.381252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x908e00, cid 3, qid 0 00:28:05.721 [2024-07-15 19:34:16.381413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:05.721 [2024-07-15 19:34:16.381419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:05.721 [2024-07-15 19:34:16.381422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:05.721 [2024-07-15 19:34:16.381425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x908e00) on tqpair=0x89c150 00:28:05.721 [2024-07-15 19:34:16.381432] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:28:05.721 0% 00:28:05.721 Data Units Read: 0 00:28:05.721 Data Units Written: 0 00:28:05.721 Host Read Commands: 0 00:28:05.721 Host Write Commands: 0 00:28:05.721 Controller Busy Time: 0 minutes 00:28:05.721 Power Cycles: 0 00:28:05.721 Power On Hours: 0 hours 00:28:05.721 Unsafe Shutdowns: 0 00:28:05.721 Unrecoverable Media Errors: 0 00:28:05.721 Lifetime Error Log Entries: 0 00:28:05.721 Warning Temperature Time: 0 minutes 00:28:05.721 Critical Temperature Time: 0 minutes 00:28:05.721 00:28:05.721 Number of Queues 00:28:05.721 
================ 00:28:05.721 Number of I/O Submission Queues: 127 00:28:05.721 Number of I/O Completion Queues: 127 00:28:05.721 00:28:05.721 Active Namespaces 00:28:05.721 ================= 00:28:05.721 Namespace ID:1 00:28:05.721 Error Recovery Timeout: Unlimited 00:28:05.721 Command Set Identifier: NVM (00h) 00:28:05.721 Deallocate: Supported 00:28:05.721 Deallocated/Unwritten Error: Not Supported 00:28:05.721 Deallocated Read Value: Unknown 00:28:05.721 Deallocate in Write Zeroes: Not Supported 00:28:05.721 Deallocated Guard Field: 0xFFFF 00:28:05.721 Flush: Supported 00:28:05.721 Reservation: Supported 00:28:05.721 Namespace Sharing Capabilities: Multiple Controllers 00:28:05.721 Size (in LBAs): 131072 (0GiB) 00:28:05.721 Capacity (in LBAs): 131072 (0GiB) 00:28:05.722 Utilization (in LBAs): 131072 (0GiB) 00:28:05.722 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:05.722 EUI64: ABCDEF0123456789 00:28:05.722 UUID: f96d990f-a94d-48d6-bacf-a2776a18da8f 00:28:05.722 Thin Provisioning: Not Supported 00:28:05.722 Per-NS Atomic Units: Yes 00:28:05.722 Atomic Boundary Size (Normal): 0 00:28:05.722 Atomic Boundary Size (PFail): 0 00:28:05.722 Atomic Boundary Offset: 0 00:28:05.722 Maximum Single Source Range Length: 65535 00:28:05.722 Maximum Copy Length: 65535 00:28:05.722 Maximum Source Range Count: 1 00:28:05.722 NGUID/EUI64 Never Reused: No 00:28:05.722 Namespace Write Protected: No 00:28:05.722 Number of LBA Formats: 1 00:28:05.722 Current LBA Format: LBA Format #00 00:28:05.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:05.722 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.722 rmmod nvme_tcp 00:28:05.722 rmmod nvme_fabrics 00:28:05.722 rmmod nvme_keyring 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1752769 ']' 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1752769 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1752769 ']' 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1752769 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@953 -- # uname 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752769 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752769' 00:28:05.722 killing process with pid 1752769 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1752769 00:28:05.722 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1752769 00:28:05.979 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:05.979 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.979 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.979 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.979 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.979 19:34:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.979 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.979 19:34:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.515 19:34:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.515 00:28:08.515 real 0m8.822s 00:28:08.515 user 0m5.064s 00:28:08.515 sys 0m4.557s 00:28:08.515 19:34:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:08.515 19:34:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:08.515 ************************************ 00:28:08.515 END TEST nvmf_identify 00:28:08.515 ************************************ 00:28:08.515 19:34:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:08.515 19:34:18 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:08.515 19:34:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:08.515 19:34:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.515 19:34:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.515 ************************************ 00:28:08.515 START TEST nvmf_perf 00:28:08.515 ************************************ 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:08.515 * Looking for test storage... 
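The nvmftestfini teardown that closes the identify test above reduces to three steps: unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt reactor process that served nqn.2016-06.io.spdk:cnode1, and flush the address off the initiator-side interface. A condensed sketch, reusing the pid and interface name from this run:

    # Condensed from the teardown echoed above; pid 1752769 and interface
    # cvl_0_1 are specific to this particular run.
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1752769                   # stop the nvmf_tgt target process
    ip -4 addr flush cvl_0_1       # drop the 10.0.0.1 test address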
00:28:08.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.515 19:34:18 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.516 19:34:18 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.516 19:34:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:13.813 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.813 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:13.813 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:13.813 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:13.813 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:13.814 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:13.814 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:13.814 Found net devices under 0000:86:00.0: cvl_0_0 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:13.814 Found net devices under 0000:86:00.1: cvl_0_1 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:13.814 19:34:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:13.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:28:13.814 00:28:13.814 --- 10.0.0.2 ping statistics --- 00:28:13.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.814 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
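The ping exchange here is the sanity check on the test bed that nvmf_tcp_init just assembled: the target-side port cvl_0_0 is moved into its own network namespace and addressed 10.0.0.2, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side, with an iptables rule opening TCP port 4420. Condensed from the commands echoed above:

    # Condensed from the nvmf_tcp_init steps in this log.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # target reachable?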
00:28:13.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:28:13.814 00:28:13.814 --- 10.0.0.1 ping statistics --- 00:28:13.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.814 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1756219 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1756219 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1756219 ']' 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:13.814 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:13.814 [2024-07-15 19:34:24.177526] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:28:13.814 [2024-07-15 19:34:24.177571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.814 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.814 [2024-07-15 19:34:24.206504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:13.814 [2024-07-15 19:34:24.234661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.814 [2024-07-15 19:34:24.276778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
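nvmfappstart then launches the target inside that namespace: -m 0xF gives it the four cores reported by the app startup banner, -e 0xFFFF enables every tracepoint group, and waitforlisten blocks until the RPC server answers. A rough sketch of the same launch, where polling rpc_get_methods stands in for the framework's waitforlisten helper (an assumption; the real helper may check readiness differently):

    # Hedged sketch of nvmfappstart; paths are relative to the SPDK tree.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                  # wait for the RPC socket to come up
    done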
00:28:13.814 [2024-07-15 19:34:24.276814] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.814 [2024-07-15 19:34:24.276821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.814 [2024-07-15 19:34:24.276827] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.815 [2024-07-15 19:34:24.276833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.815 [2024-07-15 19:34:24.276869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.815 [2024-07-15 19:34:24.276966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.815 [2024-07-15 19:34:24.277055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.815 [2024-07-15 19:34:24.277056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.815 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:13.815 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:13.815 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:13.815 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:13.815 19:34:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:13.815 19:34:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.815 19:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:13.815 19:34:24 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:17.093 19:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:17.093 19:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:17.093 19:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:28:17.093 19:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:17.093 19:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:17.093 19:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:28:17.093 19:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:17.093 19:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:17.093 19:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:17.350 [2024-07-15 19:34:27.967164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.350 19:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:17.350 19:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:17.350 19:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:17.607 19:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev 
in $bdevs 00:28:17.607 19:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:17.865 19:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:18.122 [2024-07-15 19:34:28.726000] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.122 19:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:18.122 19:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:28:18.122 19:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:28:18.122 19:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:18.122 19:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:28:19.511 Initializing NVMe Controllers 00:28:19.511 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:28:19.511 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:28:19.511 Initialization complete. Launching workers. 00:28:19.511 ======================================================== 00:28:19.511 Latency(us) 00:28:19.511 Device Information : IOPS MiB/s Average min max 00:28:19.511 PCIE (0000:5e:00.0) NSID 1 from core 0: 97281.10 380.00 328.42 44.42 7213.35 00:28:19.511 ======================================================== 00:28:19.511 Total : 97281.10 380.00 328.42 44.42 7213.35 00:28:19.511 00:28:19.511 19:34:30 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.511 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.881 Initializing NVMe Controllers 00:28:20.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:20.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:20.881 Initialization complete. Launching workers. 
00:28:20.881 ======================================================== 00:28:20.881 Latency(us) 00:28:20.881 Device Information : IOPS MiB/s Average min max 00:28:20.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 63.00 0.25 16340.55 150.20 45631.17 00:28:20.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16483.55 7957.25 47888.78 00:28:20.881 ======================================================== 00:28:20.881 Total : 124.00 0.48 16410.90 150.20 47888.78 00:28:20.881 00:28:20.881 19:34:31 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:20.881 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.255 Initializing NVMe Controllers 00:28:22.255 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:22.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:22.255 Initialization complete. Launching workers. 00:28:22.255 ======================================================== 00:28:22.255 Latency(us) 00:28:22.255 Device Information : IOPS MiB/s Average min max 00:28:22.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10740.99 41.96 2978.68 359.25 6321.91 00:28:22.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3870.00 15.12 8313.33 7109.29 16017.94 00:28:22.255 ======================================================== 00:28:22.255 Total : 14610.99 57.07 4391.67 359.25 16017.94 00:28:22.255 00:28:22.255 19:34:32 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:22.255 19:34:32 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:22.255 19:34:32 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:22.255 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.784 Initializing NVMe Controllers 00:28:24.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.784 Controller IO queue size 128, less than required. 00:28:24.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.784 Controller IO queue size 128, less than required. 00:28:24.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:24.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:24.784 Initialization complete. Launching workers. 
00:28:24.784 ======================================================== 00:28:24.784 Latency(us) 00:28:24.784 Device Information : IOPS MiB/s Average min max 00:28:24.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1188.73 297.18 110290.74 65004.37 159008.72 00:28:24.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 595.87 148.97 223270.62 61437.24 313250.92 00:28:24.784 ======================================================== 00:28:24.784 Total : 1784.60 446.15 148013.94 61437.24 313250.92 00:28:24.784 00:28:24.784 19:34:35 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:24.784 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.784 No valid NVMe controllers or AIO or URING devices found 00:28:24.784 Initializing NVMe Controllers 00:28:24.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.784 Controller IO queue size 128, less than required. 00:28:24.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.784 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:24.784 Controller IO queue size 128, less than required. 00:28:24.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.784 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:24.784 WARNING: Some requested NVMe devices were skipped 00:28:24.784 19:34:35 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:24.784 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.314 Initializing NVMe Controllers 00:28:27.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.314 Controller IO queue size 128, less than required. 00:28:27.314 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.314 Controller IO queue size 128, less than required. 00:28:27.314 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:27.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:27.314 Initialization complete. Launching workers. 
00:28:27.314 00:28:27.314 ==================== 00:28:27.314 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:27.314 TCP transport: 00:28:27.314 polls: 31461 00:28:27.314 idle_polls: 11722 00:28:27.314 sock_completions: 19739 00:28:27.314 nvme_completions: 4941 00:28:27.314 submitted_requests: 7478 00:28:27.314 queued_requests: 1 00:28:27.314 00:28:27.314 ==================== 00:28:27.314 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:27.314 TCP transport: 00:28:27.314 polls: 34953 00:28:27.314 idle_polls: 14087 00:28:27.314 sock_completions: 20866 00:28:27.314 nvme_completions: 5053 00:28:27.314 submitted_requests: 7562 00:28:27.314 queued_requests: 1 00:28:27.314 ======================================================== 00:28:27.314 Latency(us) 00:28:27.314 Device Information : IOPS MiB/s Average min max 00:28:27.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1232.99 308.25 107220.75 57747.18 158999.70 00:28:27.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1260.95 315.24 102913.80 43439.89 139494.44 00:28:27.314 ======================================================== 00:28:27.314 Total : 2493.94 623.48 105043.14 43439.89 158999.70 00:28:27.314 00:28:27.314 19:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:27.314 19:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:27.572 19:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:27.572 19:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:28:27.572 19:34:38 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=3efc3413-16be-4ba6-a37d-ec75ed36908f 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3efc3413-16be-4ba6-a37d-ec75ed36908f 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=3efc3413-16be-4ba6-a37d-ec75ed36908f 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:30.854 { 00:28:30.854 "uuid": "3efc3413-16be-4ba6-a37d-ec75ed36908f", 00:28:30.854 "name": "lvs_0", 00:28:30.854 "base_bdev": "Nvme0n1", 00:28:30.854 "total_data_clusters": 238234, 00:28:30.854 "free_clusters": 238234, 00:28:30.854 "block_size": 512, 00:28:30.854 "cluster_size": 4194304 00:28:30.854 } 00:28:30.854 ]' 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3efc3413-16be-4ba6-a37d-ec75ed36908f") .free_clusters' 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3efc3413-16be-4ba6-a37d-ec75ed36908f") .cluster_size' 00:28:30.854 19:34:41 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:30.854 952936 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:30.854 19:34:41 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3efc3413-16be-4ba6-a37d-ec75ed36908f lbd_0 20480 00:28:31.419 19:34:42 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b8716687-ea3f-435c-8e8a-bcc4b3bedb6b 00:28:31.419 19:34:42 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore b8716687-ea3f-435c-8e8a-bcc4b3bedb6b lvs_n_0 00:28:32.033 19:34:42 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=5ead9acb-65b5-4bd6-81b9-799ef6766294 00:28:32.033 19:34:42 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 5ead9acb-65b5-4bd6-81b9-799ef6766294 00:28:32.033 19:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=5ead9acb-65b5-4bd6-81b9-799ef6766294 00:28:32.033 19:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:32.033 19:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:32.033 19:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:32.033 19:34:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:32.291 19:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:32.291 { 00:28:32.291 "uuid": "3efc3413-16be-4ba6-a37d-ec75ed36908f", 00:28:32.291 "name": "lvs_0", 00:28:32.291 "base_bdev": "Nvme0n1", 00:28:32.291 "total_data_clusters": 238234, 00:28:32.291 "free_clusters": 233114, 00:28:32.291 "block_size": 512, 00:28:32.291 "cluster_size": 4194304 00:28:32.291 }, 00:28:32.291 { 00:28:32.291 "uuid": "5ead9acb-65b5-4bd6-81b9-799ef6766294", 00:28:32.291 "name": "lvs_n_0", 00:28:32.291 "base_bdev": "b8716687-ea3f-435c-8e8a-bcc4b3bedb6b", 00:28:32.291 "total_data_clusters": 5114, 00:28:32.291 "free_clusters": 5114, 00:28:32.291 "block_size": 512, 00:28:32.291 "cluster_size": 4194304 00:28:32.291 } 00:28:32.291 ]' 00:28:32.291 19:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5ead9acb-65b5-4bd6-81b9-799ef6766294") .free_clusters' 00:28:32.291 19:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:32.291 19:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5ead9acb-65b5-4bd6-81b9-799ef6766294") .cluster_size' 00:28:32.291 19:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:32.291 19:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:32.291 19:34:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:32.291 20456 00:28:32.291 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:32.291 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5ead9acb-65b5-4bd6-81b9-799ef6766294 lbd_nest_0 20456 00:28:32.549 19:34:43 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=ecb296c7-ff6a-4177-bec5-a9d57d5187fb 00:28:32.549 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:32.807 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:32.807 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ecb296c7-ff6a-4177-bec5-a9d57d5187fb 00:28:32.807 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.065 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:33.065 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:33.065 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:33.065 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:33.065 19:34:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:33.065 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.249 Initializing NVMe Controllers 00:28:45.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:45.249 Initialization complete. Launching workers. 00:28:45.249 ======================================================== 00:28:45.249 Latency(us) 00:28:45.249 Device Information : IOPS MiB/s Average min max 00:28:45.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.10 0.02 22231.84 167.35 48444.70 00:28:45.249 ======================================================== 00:28:45.249 Total : 45.10 0.02 22231.84 167.35 48444.70 00:28:45.249 00:28:45.249 19:34:54 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:45.249 19:34:54 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:45.249 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.228 Initializing NVMe Controllers 00:28:55.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:55.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:55.228 Initialization complete. Launching workers. 
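The spdk_nvme_perf runs traced above and below are one sweep over the qd_depth and io_size arrays set at host/perf.sh@95-@99; a paraphrased sketch of that loop (simplified, not the script verbatim):

  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
          /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
              -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
  done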
00:28:55.228 ======================================================== 00:28:55.228 Latency(us) 00:28:55.228 Device Information : IOPS MiB/s Average min max 00:28:55.228 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.50 9.81 12745.46 4990.06 47883.47 00:28:55.228 ======================================================== 00:28:55.228 Total : 78.50 9.81 12745.46 4990.06 47883.47 00:28:55.228 00:28:55.228 19:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:55.228 19:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:55.228 19:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:55.228 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.198 Initializing NVMe Controllers 00:29:05.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:05.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:05.198 Initialization complete. Launching workers. 00:29:05.198 ======================================================== 00:29:05.198 Latency(us) 00:29:05.198 Device Information : IOPS MiB/s Average min max 00:29:05.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8784.13 4.29 3650.42 252.25 43257.29 00:29:05.198 ======================================================== 00:29:05.198 Total : 8784.13 4.29 3650.42 252.25 43257.29 00:29:05.198 00:29:05.198 19:35:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:05.198 19:35:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:05.198 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.168 Initializing NVMe Controllers 00:29:15.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:15.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:15.168 Initialization complete. Launching workers. 00:29:15.168 ======================================================== 00:29:15.168 Latency(us) 00:29:15.168 Device Information : IOPS MiB/s Average min max 00:29:15.168 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2320.69 290.09 13789.76 867.41 30506.82 00:29:15.168 ======================================================== 00:29:15.168 Total : 2320.69 290.09 13789.76 867.41 30506.82 00:29:15.168 00:29:15.168 19:35:25 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:15.168 19:35:25 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:15.168 19:35:25 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:15.168 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.193 Initializing NVMe Controllers 00:29:25.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.193 Controller IO queue size 128, less than required. 00:29:25.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
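The MiB/s column in the latency tables above is the IOPS figure scaled by the I/O size; a quick awk cross-check against two rows already printed (illustration only):

  # MiB/s = IOPS * io_size_bytes / 2^20
  awk 'BEGIN { printf "q=32 -o 131072: %.2f MiB/s\n", 2320.69 * 131072 / 1048576 }'   # -> 290.09
  awk 'BEGIN { printf "q=32 -o 512:    %.2f MiB/s\n",  8784.13 * 512   / 1048576 }'   # -> 4.29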
00:29:25.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:25.193 Initialization complete. Launching workers. 00:29:25.193 ======================================================== 00:29:25.193 Latency(us) 00:29:25.193 Device Information : IOPS MiB/s Average min max 00:29:25.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15806.49 7.72 8098.13 1446.93 22619.36 00:29:25.193 ======================================================== 00:29:25.193 Total : 15806.49 7.72 8098.13 1446.93 22619.36 00:29:25.193 00:29:25.193 19:35:35 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:25.193 19:35:35 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.194 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.157 Initializing NVMe Controllers 00:29:35.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.157 Controller IO queue size 128, less than required. 00:29:35.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:35.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:35.157 Initialization complete. Launching workers. 00:29:35.157 ======================================================== 00:29:35.157 Latency(us) 00:29:35.157 Device Information : IOPS MiB/s Average min max 00:29:35.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1217.90 152.24 105561.13 23804.97 199076.08 00:29:35.157 ======================================================== 00:29:35.157 Total : 1217.90 152.24 105561.13 23804.97 199076.08 00:29:35.157 00:29:35.157 19:35:45 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:35.415 19:35:46 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ecb296c7-ff6a-4177-bec5-a9d57d5187fb 00:29:35.981 19:35:46 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:36.238 19:35:46 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b8716687-ea3f-435c-8e8a-bcc4b3bedb6b 00:29:36.496 19:35:47 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:36.496 19:35:47 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:36.496 19:35:47 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:36.496 19:35:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:36.496 19:35:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:36.496 19:35:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:36.496 19:35:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:36.496 19:35:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:36.496 19:35:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:36.496 rmmod nvme_tcp 00:29:36.754 rmmod nvme_fabrics 00:29:36.754 rmmod nvme_keyring 00:29:36.754 19:35:47 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1756219 ']' 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1756219 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1756219 ']' 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1756219 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1756219 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1756219' 00:29:36.754 killing process with pid 1756219 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1756219 00:29:36.754 19:35:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1756219 00:29:38.127 19:35:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:38.127 19:35:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:38.127 19:35:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:38.127 19:35:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:38.127 19:35:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:38.127 19:35:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.127 19:35:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:38.127 19:35:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.661 19:35:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:40.661 00:29:40.661 real 1m32.202s 00:29:40.661 user 5m32.959s 00:29:40.661 sys 0m14.487s 00:29:40.661 19:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:40.661 19:35:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:40.661 ************************************ 00:29:40.661 END TEST nvmf_perf 00:29:40.661 ************************************ 00:29:40.661 19:35:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:40.661 19:35:51 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:40.661 19:35:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:40.661 19:35:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.661 19:35:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.661 ************************************ 00:29:40.661 START TEST nvmf_fio_host 00:29:40.661 ************************************ 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:40.661 * Looking for test 
storage... 00:29:40.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:40.661 19:35:51 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:45.926 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
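The "Found 0000:86:00.0 (0x8086 - 0x159b)" line above is the result of matching the port's PCI vendor/device IDs against the e810/x722/mlx lists built just before it; a simplified sysfs-based sketch of that classification (the real nvmf/common.sh goes through its pci_bus_cache, so names and flow here are illustrative):

  pci=0000:86:00.0
  vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x8086 (Intel)
  device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x159b
  case "$device" in
      0x1592|0x159b) echo "Found $pci ($vendor - $device): E810, handled by the ice driver" ;;
      0x37d2)        echo "Found $pci ($vendor - $device): X722" ;;
      *)             echo "Found $pci ($vendor - $device): not in the e810/x722/mlx lists" ;;
  esac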
00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:45.926 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:45.926 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:45.927 Found net devices under 0000:86:00.0: cvl_0_0 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:45.927 Found net devices under 0000:86:00.1: cvl_0_1 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
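The "Found net devices under 0000:86:00.x: cvl_0_y" lines above come from globbing each port's net/ directory in sysfs, exactly as the pci_net_devs=(...) assignments in the trace show; reduced here to a standalone loop:

  for pci in 0000:86:00.0 0000:86:00.1; do
      pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )    # one entry per netdev exposed by this port
      for dev in "${pci_net_devs[@]##*/}"; do               # strip the directory prefix, keep e.g. cvl_0_0
          echo "Found net devices under $pci: $dev"
      done
  done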
00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:45.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:45.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:29:45.927 00:29:45.927 --- 10.0.0.2 ping statistics --- 00:29:45.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.927 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:45.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:45.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:29:45.927 00:29:45.927 --- 10.0.0.1 ping statistics --- 00:29:45.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.927 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1773268 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1773268 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1773268 ']' 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:45.927 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.927 [2024-07-15 19:35:56.607545] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:29:45.927 [2024-07-15 19:35:56.607587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.927 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.927 [2024-07-15 19:35:56.638126] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
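The -m 0xF core mask passed to nvmf_tgt above selects CPU cores 0-3, which is why the app reports four available cores and starts reactors on cores 0-3 just below; a small sketch of how such a hex mask expands (illustration only):

  mask=0xF
  for core in {0..15}; do
      (( (mask >> core) & 1 )) && echo "core $core selected"    # prints cores 0, 1, 2, 3 for 0xF
  done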
00:29:45.927 [2024-07-15 19:35:56.666645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:45.927 [2024-07-15 19:35:56.708070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.927 [2024-07-15 19:35:56.708108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.927 [2024-07-15 19:35:56.708115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.927 [2024-07-15 19:35:56.708122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.927 [2024-07-15 19:35:56.708126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.927 [2024-07-15 19:35:56.708176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.927 [2024-07-15 19:35:56.708276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.927 [2024-07-15 19:35:56.708299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:45.927 [2024-07-15 19:35:56.708300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.185 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:46.185 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:29:46.185 19:35:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:46.185 [2024-07-15 19:35:56.964693] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.185 19:35:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:46.185 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:46.185 19:35:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.185 19:35:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:46.443 Malloc1 00:29:46.443 19:35:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:46.701 19:35:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:46.959 19:35:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.959 [2024-07-15 19:35:57.762668] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.959 19:35:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:47.217 19:35:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:47.217 19:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:47.217 19:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:47.217 19:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:47.217 19:35:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:47.538 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:47.538 fio-3.35 00:29:47.538 Starting 1 thread 00:29:47.538 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.062 00:29:50.062 test: (groupid=0, jobs=1): err= 0: pid=1773643: Mon Jul 15 19:36:00 2024 00:29:50.062 read: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(91.6MiB/2005msec) 00:29:50.062 slat (nsec): min=1607, max=166948, avg=1740.31, stdev=1588.88 00:29:50.062 clat (usec): min=2620, max=10499, avg=6045.60, stdev=451.20 00:29:50.062 lat (usec): min=2644, max=10501, avg=6047.34, stdev=451.10 00:29:50.062 clat percentiles (usec): 00:29:50.062 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5669], 00:29:50.062 | 30.00th=[ 5866], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6128], 00:29:50.062 | 
70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:29:50.062 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8455], 99.95th=[ 9634], 00:29:50.062 | 99.99th=[ 9765] 00:29:50.062 bw ( KiB/s): min=45536, max=47520, per=99.95%, avg=46750.00, stdev=882.38, samples=4 00:29:50.062 iops : min=11384, max=11880, avg=11687.50, stdev=220.60, samples=4 00:29:50.062 write: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.9MiB/2005msec); 0 zone resets 00:29:50.062 slat (nsec): min=1658, max=168347, avg=1833.20, stdev=1244.01 00:29:50.062 clat (usec): min=1698, max=9657, avg=4865.23, stdev=387.74 00:29:50.062 lat (usec): min=1708, max=9659, avg=4867.06, stdev=387.67 00:29:50.062 clat percentiles (usec): 00:29:50.062 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:29:50.062 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 4948], 00:29:50.062 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:29:50.062 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 7898], 99.95th=[ 8979], 00:29:50.062 | 99.99th=[ 9634] 00:29:50.062 bw ( KiB/s): min=45896, max=46848, per=100.00%, avg=46432.00, stdev=436.47, samples=4 00:29:50.062 iops : min=11474, max=11712, avg=11608.00, stdev=109.12, samples=4 00:29:50.062 lat (msec) : 2=0.02%, 4=0.55%, 10=99.43%, 20=0.01% 00:29:50.062 cpu : usr=70.01%, sys=26.35%, ctx=107, majf=0, minf=6 00:29:50.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:50.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:50.062 issued rwts: total=23445,23273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.062 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:50.062 00:29:50.062 Run status group 0 (all jobs): 00:29:50.062 READ: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=91.6MiB (96.0MB), run=2005-2005msec 00:29:50.062 WRITE: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.9MiB (95.3MB), run=2005-2005msec 00:29:50.062 19:36:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:50.062 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:50.062 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:50.063 19:36:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:50.319 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:50.319 fio-3.35 00:29:50.320 Starting 1 thread 00:29:50.320 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.844 00:29:52.844 test: (groupid=0, jobs=1): err= 0: pid=1774285: Mon Jul 15 19:36:03 2024 00:29:52.844 read: IOPS=10.6k, BW=166MiB/s (174MB/s)(333MiB/2007msec) 00:29:52.844 slat (nsec): min=2616, max=91206, avg=2847.83, stdev=1185.19 00:29:52.844 clat (usec): min=2069, max=14406, avg=7123.59, stdev=1669.31 00:29:52.844 lat (usec): min=2071, max=14409, avg=7126.44, stdev=1669.37 00:29:52.844 clat percentiles (usec): 00:29:52.844 | 1.00th=[ 3851], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5669], 00:29:52.844 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 7046], 60.00th=[ 7504], 00:29:52.844 | 70.00th=[ 7963], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[10028], 00:29:52.844 | 99.00th=[11994], 99.50th=[12518], 99.90th=[13435], 99.95th=[13566], 00:29:52.844 | 99.99th=[13698] 00:29:52.844 bw ( KiB/s): min=79488, max=93120, per=50.19%, avg=85208.00, stdev=5782.20, samples=4 00:29:52.844 iops : min= 4968, max= 5820, avg=5325.50, stdev=361.39, samples=4 00:29:52.844 write: IOPS=6145, BW=96.0MiB/s (101MB/s)(174MiB/1817msec); 0 zone resets 00:29:52.844 slat (usec): min=30, max=229, avg=31.88, stdev= 3.99 00:29:52.844 clat (usec): min=3760, max=15829, avg=8621.23, stdev=1499.69 00:29:52.844 lat (usec): min=3791, max=15860, avg=8653.11, stdev=1499.79 00:29:52.844 clat percentiles (usec): 00:29:52.844 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7373], 00:29:52.844 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:29:52.844 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11207], 00:29:52.844 | 99.00th=[12649], 99.50th=[13304], 99.90th=[14877], 99.95th=[15401], 00:29:52.844 | 99.99th=[15795] 00:29:52.844 bw ( KiB/s): min=83232, max=96352, per=90.25%, avg=88744.00, stdev=5493.10, 
samples=4 00:29:52.844 iops : min= 5202, max= 6022, avg=5546.50, stdev=343.32, samples=4 00:29:52.844 lat (msec) : 4=0.88%, 10=89.91%, 20=9.22% 00:29:52.844 cpu : usr=85.04%, sys=13.61%, ctx=26, majf=0, minf=3 00:29:52.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:29:52.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:52.844 issued rwts: total=21295,11167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.844 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:52.844 00:29:52.844 Run status group 0 (all jobs): 00:29:52.844 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=333MiB (349MB), run=2007-2007msec 00:29:52.844 WRITE: bw=96.0MiB/s (101MB/s), 96.0MiB/s-96.0MiB/s (101MB/s-101MB/s), io=174MiB (183MB), run=1817-1817msec 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:29:52.844 19:36:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:29:56.124 Nvme0n1 00:29:56.124 19:36:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:58.653 19:36:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=f31d09d9-0896-4ecd-b38d-001ccbb7ead0 00:29:58.653 19:36:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb f31d09d9-0896-4ecd-b38d-001ccbb7ead0 00:29:58.653 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=f31d09d9-0896-4ecd-b38d-001ccbb7ead0 00:29:58.653 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:58.653 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:58.653 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:58.653 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:58.910 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:58.910 { 00:29:58.910 "uuid": 
"f31d09d9-0896-4ecd-b38d-001ccbb7ead0", 00:29:58.910 "name": "lvs_0", 00:29:58.910 "base_bdev": "Nvme0n1", 00:29:58.910 "total_data_clusters": 930, 00:29:58.910 "free_clusters": 930, 00:29:58.910 "block_size": 512, 00:29:58.910 "cluster_size": 1073741824 00:29:58.910 } 00:29:58.910 ]' 00:29:58.910 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f31d09d9-0896-4ecd-b38d-001ccbb7ead0") .free_clusters' 00:29:58.910 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:29:58.910 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f31d09d9-0896-4ecd-b38d-001ccbb7ead0") .cluster_size' 00:29:58.910 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:58.910 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:29:58.910 19:36:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:29:58.910 952320 00:29:58.910 19:36:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:59.167 c33e4a1d-20ad-44a1-aa68-e5ddbd78ff52 00:29:59.424 19:36:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:59.424 19:36:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:59.681 19:36:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:59.939 19:36:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:00.197 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:00.197 fio-3.35 00:30:00.197 Starting 1 thread 00:30:00.197 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.724 00:30:02.724 test: (groupid=0, jobs=1): err= 0: pid=1776478: Mon Jul 15 19:36:13 2024 00:30:02.724 read: IOPS=7978, BW=31.2MiB/s (32.7MB/s)(62.5MiB/2006msec) 00:30:02.724 slat (nsec): min=1612, max=104989, avg=1714.16, stdev=1090.80 00:30:02.724 clat (usec): min=788, max=170134, avg=8829.79, stdev=10319.77 00:30:02.724 lat (usec): min=789, max=170153, avg=8831.51, stdev=10319.93 00:30:02.724 clat percentiles (msec): 00:30:02.724 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:30:02.724 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:02.724 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:30:02.724 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 171], 00:30:02.724 | 99.99th=[ 171] 00:30:02.724 bw ( KiB/s): min=22491, max=35088, per=99.82%, avg=31856.75, stdev=6245.75, samples=4 00:30:02.724 iops : min= 5622, max= 8772, avg=7964.00, stdev=1561.81, samples=4 00:30:02.724 write: IOPS=7958, BW=31.1MiB/s (32.6MB/s)(62.4MiB/2006msec); 0 zone resets 00:30:02.724 slat (nsec): min=1666, max=85587, avg=1788.73, stdev=742.22 00:30:02.724 clat (usec): min=227, max=168599, avg=7111.55, stdev=9649.16 00:30:02.724 lat (usec): min=229, max=168604, avg=7113.34, stdev=9649.34 00:30:02.724 clat percentiles (msec): 00:30:02.724 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:30:02.724 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:30:02.724 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:30:02.724 | 99.00th=[ 8], 99.50th=[ 10], 99.90th=[ 169], 99.95th=[ 169], 00:30:02.724 | 99.99th=[ 169] 00:30:02.724 bw ( KiB/s): min=23417, max=34688, per=99.91%, avg=31806.25, stdev=5593.40, samples=4 00:30:02.724 iops : min= 5854, max= 8672, avg=7951.50, stdev=1398.48, samples=4 00:30:02.724 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:30:02.724 lat (msec) : 2=0.04%, 4=0.24%, 10=99.09%, 20=0.20%, 250=0.40% 00:30:02.724 cpu : usr=70.72%, sys=26.83%, ctx=118, 
majf=0, minf=6 00:30:02.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:02.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:02.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:02.724 issued rwts: total=16005,15965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:02.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:02.724 00:30:02.724 Run status group 0 (all jobs): 00:30:02.724 READ: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=62.5MiB (65.6MB), run=2006-2006msec 00:30:02.724 WRITE: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=62.4MiB (65.4MB), run=2006-2006msec 00:30:02.724 19:36:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:02.724 19:36:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:03.657 19:36:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=8123dc8e-7ded-459b-a210-bd57cc541f66 00:30:03.658 19:36:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 8123dc8e-7ded-459b-a210-bd57cc541f66 00:30:03.658 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=8123dc8e-7ded-459b-a210-bd57cc541f66 00:30:03.658 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:03.658 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:03.658 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:03.658 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:03.915 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:03.915 { 00:30:03.915 "uuid": "f31d09d9-0896-4ecd-b38d-001ccbb7ead0", 00:30:03.915 "name": "lvs_0", 00:30:03.915 "base_bdev": "Nvme0n1", 00:30:03.915 "total_data_clusters": 930, 00:30:03.915 "free_clusters": 0, 00:30:03.915 "block_size": 512, 00:30:03.915 "cluster_size": 1073741824 00:30:03.915 }, 00:30:03.915 { 00:30:03.915 "uuid": "8123dc8e-7ded-459b-a210-bd57cc541f66", 00:30:03.915 "name": "lvs_n_0", 00:30:03.915 "base_bdev": "c33e4a1d-20ad-44a1-aa68-e5ddbd78ff52", 00:30:03.915 "total_data_clusters": 237847, 00:30:03.915 "free_clusters": 237847, 00:30:03.915 "block_size": 512, 00:30:03.915 "cluster_size": 4194304 00:30:03.915 } 00:30:03.915 ]' 00:30:03.915 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8123dc8e-7ded-459b-a210-bd57cc541f66") .free_clusters' 00:30:03.915 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:03.915 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8123dc8e-7ded-459b-a210-bd57cc541f66") .cluster_size' 00:30:03.915 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:03.915 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:03.915 19:36:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:03.915 951388 00:30:03.915 19:36:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:04.482 81a10e43-904f-4bb4-b07a-70c5ea033f95 00:30:04.482 19:36:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:04.740 19:36:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:04.998 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:05.264 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:05.264 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:05.264 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:05.264 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:05.264 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:05.264 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:05.264 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:05.264 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:05.264 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
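[Editor's note] The LD_PRELOAD line above is where the fio_plugin helper hands control to stock fio: it first walks the plugin's shared-library dependencies (ldd ... | grep libasan / libclang_rt.asan | awk '{print $3}') so that, on sanitizer builds, the ASAN runtime can be preloaded ahead of the plugin, and then preloads build/fio/spdk_nvme so the job file's ioengine=spdk resolves to SPDK's external engine. Note that --filename carries the NVMe-oF connection string (trtype/adrfam/traddr/trsvcid/ns) rather than a device path, and that the lvol sizes used earlier follow free_mb = free_clusters * cluster_size / 1 MiB (930 * 1 GiB = 952320 MiB for lvs_0, 237847 * 4 MiB = 951388 MiB for lvs_n_0). A condensed sketch of the invocation pattern, using the paths from this job:

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
  # pick up an ASAN runtime only if the plugin links one, so it loads before the plugin
  asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt.asan' | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096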
00:30:05.264 19:36:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:05.521 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:05.521 fio-3.35 00:30:05.521 Starting 1 thread 00:30:05.521 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.052 00:30:08.052 test: (groupid=0, jobs=1): err= 0: pid=1777491: Mon Jul 15 19:36:18 2024 00:30:08.052 read: IOPS=7739, BW=30.2MiB/s (31.7MB/s)(60.7MiB/2007msec) 00:30:08.052 slat (nsec): min=1594, max=102980, avg=1713.69, stdev=1140.07 00:30:08.052 clat (usec): min=3139, max=15346, avg=9129.80, stdev=765.08 00:30:08.052 lat (usec): min=3143, max=15348, avg=9131.51, stdev=765.02 00:30:08.052 clat percentiles (usec): 00:30:08.052 | 1.00th=[ 7373], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8586], 00:30:08.052 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:30:08.052 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290], 00:30:08.052 | 99.00th=[10814], 99.50th=[10945], 99.90th=[13304], 99.95th=[15139], 00:30:08.052 | 99.99th=[15270] 00:30:08.052 bw ( KiB/s): min=29520, max=31576, per=99.87%, avg=30918.00, stdev=948.36, samples=4 00:30:08.052 iops : min= 7380, max= 7894, avg=7729.50, stdev=237.09, samples=4 00:30:08.052 write: IOPS=7727, BW=30.2MiB/s (31.7MB/s)(60.6MiB/2007msec); 0 zone resets 00:30:08.052 slat (nsec): min=1635, max=90057, avg=1791.41, stdev=796.92 00:30:08.052 clat (usec): min=1471, max=14369, avg=7297.19, stdev=652.24 00:30:08.052 lat (usec): min=1475, max=14371, avg=7298.98, stdev=652.21 00:30:08.052 clat percentiles (usec): 00:30:08.052 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6783], 00:30:08.052 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:30:08.052 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291], 00:30:08.052 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[11338], 99.95th=[12256], 00:30:08.052 | 99.99th=[14353] 00:30:08.052 bw ( KiB/s): min=30736, max=31232, per=100.00%, avg=30916.00, stdev=217.18, samples=4 00:30:08.052 iops : min= 7684, max= 7808, avg=7729.00, stdev=54.30, samples=4 00:30:08.052 lat (msec) : 2=0.01%, 4=0.09%, 10=94.21%, 20=5.69% 00:30:08.052 cpu : usr=68.44%, sys=28.96%, ctx=108, majf=0, minf=6 00:30:08.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:08.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:08.052 issued rwts: total=15534,15510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:08.052 00:30:08.052 Run status group 0 (all jobs): 00:30:08.052 READ: bw=30.2MiB/s (31.7MB/s), 30.2MiB/s-30.2MiB/s (31.7MB/s-31.7MB/s), io=60.7MiB (63.6MB), run=2007-2007msec 00:30:08.052 WRITE: bw=30.2MiB/s (31.7MB/s), 30.2MiB/s-30.2MiB/s (31.7MB/s-31.7MB/s), io=60.6MiB (63.5MB), run=2007-2007msec 00:30:08.052 19:36:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:08.052 19:36:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:08.052 19:36:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:12.234 19:36:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:12.235 19:36:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:14.754 19:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:14.754 19:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:16.646 19:36:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:16.646 19:36:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:16.646 19:36:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:16.647 rmmod nvme_tcp 00:30:16.647 rmmod nvme_fabrics 00:30:16.647 rmmod nvme_keyring 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1773268 ']' 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1773268 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1773268 ']' 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1773268 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1773268 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1773268' 00:30:16.647 killing process with pid 1773268 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1773268 00:30:16.647 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1773268 00:30:16.905 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:16.905 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:16.905 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:16.905 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:30:16.905 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:16.905 19:36:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.905 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:16.905 19:36:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.805 19:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:18.805 00:30:18.805 real 0m38.542s 00:30:18.805 user 2m35.203s 00:30:18.806 sys 0m8.331s 00:30:18.806 19:36:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:18.806 19:36:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.806 ************************************ 00:30:18.806 END TEST nvmf_fio_host 00:30:18.806 ************************************ 00:30:19.063 19:36:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:19.063 19:36:29 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:19.063 19:36:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:19.063 19:36:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.063 19:36:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.063 ************************************ 00:30:19.063 START TEST nvmf_failover 00:30:19.063 ************************************ 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:19.063 * Looking for test storage... 
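[Editor's note] Before any failover logic runs, nvmftestinit (traced below) detects the two e810 ports, moves one of them into a private network namespace, and gives target and initiator back-to-back addresses so the NVMe/TCP connection never leaves the host. A condensed sketch of that setup, using the interface and namespace names that appear in the trace below:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions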
00:30:19.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:19.063 19:36:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:24.340 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:24.341 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:24.341 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:24.341 Found net devices under 0000:86:00.0: cvl_0_0 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:24.341 Found net devices under 0000:86:00.1: cvl_0_1 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:24.341 19:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:24.341 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:24.341 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:24.341 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:24.341 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:24.341 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:24.341 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:24.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:24.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:30:24.603 00:30:24.603 --- 10.0.0.2 ping statistics --- 00:30:24.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.603 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:24.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:24.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:30:24.603 00:30:24.603 --- 10.0.0.1 ping statistics --- 00:30:24.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.603 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:24.603 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.604 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1782626 00:30:24.604 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1782626 00:30:24.604 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1782626 ']' 00:30:24.604 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.604 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:24.604 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.604 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:24.604 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:24.604 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.604 [2024-07-15 19:36:35.281143] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:30:24.604 [2024-07-15 19:36:35.281186] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.604 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.604 [2024-07-15 19:36:35.310475] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:24.604 [2024-07-15 19:36:35.337205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:24.604 [2024-07-15 19:36:35.377941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.604 [2024-07-15 19:36:35.377980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.604 [2024-07-15 19:36:35.377987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.604 [2024-07-15 19:36:35.377993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.604 [2024-07-15 19:36:35.377998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:24.604 [2024-07-15 19:36:35.378096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.604 [2024-07-15 19:36:35.378185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:24.604 [2024-07-15 19:36:35.378186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.861 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:24.861 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:24.861 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:24.861 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:24.861 19:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:24.861 19:36:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.861 19:36:35 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:24.861 [2024-07-15 19:36:35.655103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.861 19:36:35 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:25.118 Malloc0 00:30:25.118 19:36:35 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:25.375 19:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:25.631 19:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:25.631 [2024-07-15 19:36:36.434926] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.631 19:36:36 nvmf_tcp.nvmf_failover -- 
host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:25.887 [2024-07-15 19:36:36.607406] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:25.887 19:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:26.144 [2024-07-15 19:36:36.779984] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:26.144 19:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1782889 00:30:26.144 19:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:26.144 19:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:26.144 19:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1782889 /var/tmp/bdevperf.sock 00:30:26.144 19:36:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1782889 ']' 00:30:26.144 19:36:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:26.144 19:36:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:26.144 19:36:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:26.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
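[Editor's note] What follows is the actual failover exercise: bdevperf (started above with -q 128 -o 4096 -w verify -t 15 -f) attaches controller NVMe0 through the 4420 listener and again through 4421 so the bdev has two paths, bdevperf.py perform_tests starts the verify workload, and after a short sleep the 4420 listener is removed from cnode1 to push I/O onto the surviving path; the burst of tcp.c:1621 notices further down appears while the target transitions the dropped connections. The RPC sequence, condensed from the trace (RPC and BDEV_SOCK are just shorthand here):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BDEV_SOCK=/var/tmp/bdevperf.sock
  $RPC -s $BDEV_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s $BDEV_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $BDEV_SOCK perform_tests &
  sleep 1
  # drop the primary listener; I/O should continue over 4421
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420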
00:30:26.144 19:36:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:26.144 19:36:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:26.400 19:36:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:26.400 19:36:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:26.400 19:36:37 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:26.401 NVMe0n1 00:30:26.657 19:36:37 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:26.914 00:30:26.914 19:36:37 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1783099 00:30:26.914 19:36:37 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:26.914 19:36:37 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:27.844 19:36:38 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.100 [2024-07-15 19:36:38.792720] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.100 [2024-07-15 19:36:38.792769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.100 [2024-07-15 19:36:38.792777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.100 [2024-07-15 19:36:38.792788] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.100 [2024-07-15 19:36:38.792794] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.100 [2024-07-15 19:36:38.792799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.100 [2024-07-15 19:36:38.792806] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792812] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792829] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792834] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792840] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792846] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792852] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792864] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792881] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792893] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792899] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792904] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792910] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792921] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792926] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792937] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792944] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792956] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 
00:30:28.101 [2024-07-15 19:36:38.792967] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792979] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.792999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793005] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793011] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793016] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793022] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793027] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793033] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793044] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793050] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793086] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is 
same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793097] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793106] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793125] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793130] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793143] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793149] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793162] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793185] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793196] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793222] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793234] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793252] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.101 [2024-07-15 19:36:38.793258] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.102 [2024-07-15 19:36:38.793264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.102 [2024-07-15 19:36:38.793270] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.102 [2024-07-15 19:36:38.793276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.102 [2024-07-15 19:36:38.793282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.102 [2024-07-15 19:36:38.793289] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.102 [2024-07-15 19:36:38.793295] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5cd30 is same with the state(5) to be set 00:30:28.102 19:36:38 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:31.429 19:36:41 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.429 00:30:31.429 19:36:42 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:31.429 [2024-07-15 19:36:42.259177] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e2b0 is same with the state(5) to be set 00:30:31.429 [2024-07-15 19:36:42.259216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e2b0 is same with the state(5) to be set 00:30:31.429 [2024-07-15 19:36:42.259223] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e2b0 is same with the state(5) to be set 00:30:31.429 [2024-07-15 19:36:42.259234] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e2b0 is same with the state(5) to be set 00:30:31.429 [2024-07-15 19:36:42.259241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e2b0 is same with the state(5) to be set 00:30:31.429 [2024-07-15 19:36:42.259247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e2b0 is same with the state(5) to be set 00:30:31.429 [2024-07-15 19:36:42.259252] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e2b0 is same with the state(5) to be set 00:30:31.429 [2024-07-15 19:36:42.259262] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e2b0 is same with the state(5) to be set 00:30:31.685 19:36:42 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:34.958 19:36:45 nvmf_tcp.nvmf_failover -- 
host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.958 [2024-07-15 19:36:45.454791] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.958 19:36:45 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:35.888 19:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:35.888 [2024-07-15 19:36:46.652244] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e990 is same with the state(5) to be set 00:30:35.888 [2024-07-15 19:36:46.652285] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e990 is same with the state(5) to be set 00:30:35.888 [2024-07-15 19:36:46.652292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e990 is same with the state(5) to be set 00:30:35.888 [2024-07-15 19:36:46.652298] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e990 is same with the state(5) to be set 00:30:35.888 [2024-07-15 19:36:46.652304] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e990 is same with the state(5) to be set 00:30:35.888 [2024-07-15 19:36:46.652310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e990 is same with the state(5) to be set 00:30:35.888 [2024-07-15 19:36:46.652316] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e990 is same with the state(5) to be set 00:30:35.888 19:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1783099 00:30:42.464 0 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1782889 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1782889 ']' 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1782889 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1782889 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1782889' 00:30:42.464 killing process with pid 1782889 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1782889 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1782889 00:30:42.464 19:36:52 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:42.464 [2024-07-15 19:36:36.838251] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
00:30:42.464 [2024-07-15 19:36:36.838306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1782889 ] 00:30:42.464 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.464 [2024-07-15 19:36:36.864660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:42.464 [2024-07-15 19:36:36.893491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.464 [2024-07-15 19:36:36.934346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.464 Running I/O for 15 seconds... 00:30:42.464 [2024-07-15 19:36:38.794799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.794834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.794850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.794858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.794867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.794874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.794884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.794891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.794899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.794905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.794913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.794920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.794929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.794935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.794944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.794951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.794959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.794965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.794974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.794981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.794989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.795000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.795008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.795015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.795024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.795031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.795039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.795045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.795053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.795060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.795068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.795074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.795084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.795092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.795101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.795107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.795116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.795122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.795131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.464 [2024-07-15 19:36:38.795138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.464 [2024-07-15 19:36:38.795147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.465 [2024-07-15 19:36:38.795154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.465 [2024-07-15 19:36:38.795169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.465 [2024-07-15 19:36:38.795185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.465 [2024-07-15 19:36:38.795202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 
19:36:38.795275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.465 [2024-07-15 19:36:38.795344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.465 [2024-07-15 19:36:38.795707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.465 [2024-07-15 19:36:38.795722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95328 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:42.465 [2024-07-15 19:36:38.795738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.465 [2024-07-15 19:36:38.795753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.465 [2024-07-15 19:36:38.795767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.465 [2024-07-15 19:36:38.795778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.465 [2024-07-15 19:36:38.795785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.466 [2024-07-15 19:36:38.795799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.466 [2024-07-15 19:36:38.795814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 
[2024-07-15 19:36:38.795891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.795991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.795997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.466 [2024-07-15 19:36:38.796379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.466 [2024-07-15 19:36:38.796386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 
19:36:38.796510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.467 [2024-07-15 19:36:38.796547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96016 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96024 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96032 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96040 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96048 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96056 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:96096 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96104 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96112 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.796880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.796885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.796891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96120 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.796898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.807655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.807667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.807674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96128 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.807682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.807690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.467 [2024-07-15 19:36:38.807701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.467 [2024-07-15 19:36:38.807707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96136 len:8 PRP1 0x0 PRP2 0x0 00:30:42.467 [2024-07-15 19:36:38.807714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.807769] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17ae2a0 was disconnected and freed. reset controller. 
00:30:42.467 [2024-07-15 19:36:38.807782] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:42.467 [2024-07-15 19:36:38.807812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.467 [2024-07-15 19:36:38.807823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.807833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.467 [2024-07-15 19:36:38.807843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.467 [2024-07-15 19:36:38.807853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.468 [2024-07-15 19:36:38.807863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:38.807874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.468 [2024-07-15 19:36:38.807884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:38.807893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:42.468 [2024-07-15 19:36:38.807926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1787dd0 (9): Bad file descriptor 00:30:42.468 [2024-07-15 19:36:38.812271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:42.468 [2024-07-15 19:36:38.888034] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
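The block above records one complete failover cycle of this test: queued I/O on the old queue pair is aborted with "SQ DELETION", bdev_nvme fails the controller over from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes. A minimal sketch of how such a multipath/failover scenario is typically driven against a running SPDK nvmf_tgt follows; it assumes scripts/rpc.py is available and uses illustrative bdev names (Malloc0, Nvme0), while the NQN, address, and ports are the ones appearing in the log. This is an outline of the technique, not the exact test script used by this job.

  # create a TCP transport and a subsystem backed by a malloc bdev (names illustrative)
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # expose the subsystem on several listeners so alternate paths exist
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # attach the same controller once per path; additional calls register failover trids
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1

  # removing the active listener forces the abort/failover/reset sequence seen above
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With in-flight I/O running against Nvme0 (e.g. via bdevperf), removing the 4420 listener produces exactly the pattern logged here: queued requests are manually completed as ABORTED - SQ DELETION, bdev_nvme_failover_trid moves the controller to the next registered path, and _bdev_nvme_reset_ctrlr_complete reports a successful reset.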
00:30:42.468 [2024-07-15 19:36:42.259686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.468 [2024-07-15 19:36:42.259722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.468 [2024-07-15 19:36:42.259740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.468 [2024-07-15 19:36:42.259754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.468 [2024-07-15 19:36:42.259771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1787dd0 is same with the state(5) to be set 00:30:42.468 [2024-07-15 19:36:42.259832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.468 [2024-07-15 19:36:42.259848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.468 [2024-07-15 19:36:42.259867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.468 [2024-07-15 19:36:42.259883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.468 [2024-07-15 19:36:42.259898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.468 [2024-07-15 19:36:42.259913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.468 [2024-07-15 19:36:42.259928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.259943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.259957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.259973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.259988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.259996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.468 [2024-07-15 19:36:42.260251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.468 [2024-07-15 19:36:42.260259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.469 [2024-07-15 19:36:42.260312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.469 [2024-07-15 19:36:42.260327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260554] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25936 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.469 [2024-07-15 19:36:42.260829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.469 [2024-07-15 19:36:42.260835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.260850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 
[2024-07-15 19:36:42.260864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.260879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.260893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.260908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.260923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.260937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.260952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.260966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.260982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.260990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.260996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261477] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.470 [2024-07-15 19:36:42.261484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.470 [2024-07-15 19:36:42.261493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261627] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:42.261677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:42.261692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:42.261707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:42.261721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:42.261737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:42.261752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:42.261767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.471 
[2024-07-15 19:36:42.261795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.471 [2024-07-15 19:36:42.261801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25544 len:8 PRP1 0x0 PRP2 0x0 00:30:42.471 [2024-07-15 19:36:42.261807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:42.261848] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1952c50 was disconnected and freed. reset controller. 00:30:42.471 [2024-07-15 19:36:42.261858] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:42.471 [2024-07-15 19:36:42.261865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:42.471 [2024-07-15 19:36:42.264986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:42.471 [2024-07-15 19:36:42.265016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1787dd0 (9): Bad file descriptor 00:30:42.471 [2024-07-15 19:36:42.342372] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:42.471 [2024-07-15 19:36:46.654112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 
nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.471 [2024-07-15 19:36:46.654424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42520 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:46.654441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:46.654457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:46.654472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.471 [2024-07-15 19:36:46.654480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.471 [2024-07-15 19:36:46.654486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 
19:36:46.654593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.472 [2024-07-15 19:36:46.654802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.472 [2024-07-15 19:36:46.654808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.473 [2024-07-15 19:36:46.654816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.473 [2024-07-15 19:36:46.654823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.473 [2024-07-15 19:36:46.654832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.473 [2024-07-15 19:36:46.654838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.473 [2024-07-15 19:36:46.654847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.473 [2024-07-15 19:36:46.654854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.473 [2024-07-15 19:36:46.654862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.473 [2024-07-15 19:36:46.654868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.473 [2024-07-15 19:36:46.654876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.473 [2024-07-15 19:36:46.654883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.473 [2024-07-15 19:36:46.654892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.473 [2024-07-15 19:36:46.654899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.473 [2024-07-15 19:36:46.654907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.473 [2024-07-15 19:36:46.654913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.473 [2024-07-15 19:36:46.654921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.473 [2024-07-15 19:36:46.654928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.474 [2024-07-15 19:36:46.654937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.474 [2024-07-15 19:36:46.654944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.474 [2024-07-15 19:36:46.654955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.474 [2024-07-15 19:36:46.654965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.474 [2024-07-15 19:36:46.654975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.474 [2024-07-15 19:36:46.654984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.474 [2024-07-15 19:36:46.654993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.474 [2024-07-15 19:36:46.655003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.474 [2024-07-15 19:36:46.655012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.474 [2024-07-15 19:36:46.655021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.474 [2024-07-15 19:36:46.655030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.474 [2024-07-15 19:36:46.655038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.474 [2024-07-15 19:36:46.655049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.474 [2024-07-15 19:36:46.655059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.474 [2024-07-15 19:36:46.655068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.474 [2024-07-15 19:36:46.655074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:42.474 [2024-07-15 19:36:46.655082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.474 [2024-07-15 19:36:46.655089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.475 [2024-07-15 19:36:46.655100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.475 [2024-07-15 19:36:46.655108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 19:36:46.655118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.476 [2024-07-15 19:36:46.655125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 19:36:46.655133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.476 [2024-07-15 19:36:46.655140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 19:36:46.655148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.476 [2024-07-15 19:36:46.655154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 19:36:46.655162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.476 [2024-07-15 19:36:46.655169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 19:36:46.655178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:42.476 [2024-07-15 19:36:46.655186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 19:36:46.655211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.476 [2024-07-15 19:36:46.655219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42904 len:8 PRP1 0x0 PRP2 0x0 00:30:42.476 [2024-07-15 19:36:46.655232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 19:36:46.655246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.476 [2024-07-15 19:36:46.655253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.476 [2024-07-15 19:36:46.655258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42912 len:8 PRP1 0x0 PRP2 0x0 00:30:42.476 [2024-07-15 19:36:46.655265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 19:36:46.655272] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.476 [2024-07-15 19:36:46.655278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.477 [2024-07-15 19:36:46.655285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42920 len:8 PRP1 0x0 PRP2 0x0 00:30:42.477 [2024-07-15 19:36:46.655291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.477 [2024-07-15 19:36:46.655298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.477 [2024-07-15 19:36:46.655303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.477 [2024-07-15 19:36:46.655310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42928 len:8 PRP1 0x0 PRP2 0x0 00:30:42.477 [2024-07-15 19:36:46.655318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.477 [2024-07-15 19:36:46.655326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.477 [2024-07-15 19:36:46.655333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.477 [2024-07-15 19:36:46.655339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42936 len:8 PRP1 0x0 PRP2 0x0 00:30:42.477 [2024-07-15 19:36:46.655348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.477 [2024-07-15 19:36:46.655355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.477 [2024-07-15 19:36:46.655360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.477 [2024-07-15 19:36:46.655366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42944 len:8 PRP1 0x0 PRP2 0x0 00:30:42.477 [2024-07-15 19:36:46.655372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.477 [2024-07-15 19:36:46.655379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.477 [2024-07-15 19:36:46.655386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.477 [2024-07-15 19:36:46.655392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42952 len:8 PRP1 0x0 PRP2 0x0 00:30:42.477 [2024-07-15 19:36:46.655398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.477 [2024-07-15 19:36:46.655405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.477 [2024-07-15 19:36:46.655410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.477 [2024-07-15 19:36:46.655415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42960 len:8 PRP1 0x0 PRP2 0x0 00:30:42.478 [2024-07-15 19:36:46.655421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.478 [2024-07-15 19:36:46.655428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:30:42.478 [2024-07-15 19:36:46.655433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.478 [2024-07-15 19:36:46.655439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42968 len:8 PRP1 0x0 PRP2 0x0 00:30:42.478 [2024-07-15 19:36:46.655448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.478 [2024-07-15 19:36:46.655457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.478 [2024-07-15 19:36:46.655464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.478 [2024-07-15 19:36:46.655469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42976 len:8 PRP1 0x0 PRP2 0x0 00:30:42.478 [2024-07-15 19:36:46.655476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.478 [2024-07-15 19:36:46.655483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.478 [2024-07-15 19:36:46.655488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.478 [2024-07-15 19:36:46.655495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42984 len:8 PRP1 0x0 PRP2 0x0 00:30:42.479 [2024-07-15 19:36:46.655502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.479 [2024-07-15 19:36:46.655509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.479 [2024-07-15 19:36:46.655514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.479 [2024-07-15 19:36:46.655519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42992 len:8 PRP1 0x0 PRP2 0x0 00:30:42.479 [2024-07-15 19:36:46.655525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.479 [2024-07-15 19:36:46.655532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.479 [2024-07-15 19:36:46.655538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.479 [2024-07-15 19:36:46.655555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43000 len:8 PRP1 0x0 PRP2 0x0 00:30:42.479 [2024-07-15 19:36:46.655563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.479 [2024-07-15 19:36:46.655570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.479 [2024-07-15 19:36:46.655576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.479 [2024-07-15 19:36:46.655582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43008 len:8 PRP1 0x0 PRP2 0x0 00:30:42.479 [2024-07-15 19:36:46.655589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.479 [2024-07-15 19:36:46.655596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.479 [2024-07-15 
19:36:46.655602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.479 [2024-07-15 19:36:46.655608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43016 len:8 PRP1 0x0 PRP2 0x0 00:30:42.479 [2024-07-15 19:36:46.655614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.479 [2024-07-15 19:36:46.655621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.479 [2024-07-15 19:36:46.655626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.479 [2024-07-15 19:36:46.655632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43024 len:8 PRP1 0x0 PRP2 0x0 00:30:42.479 [2024-07-15 19:36:46.655639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.479 [2024-07-15 19:36:46.655648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.479 [2024-07-15 19:36:46.655654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.479 [2024-07-15 19:36:46.655660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43032 len:8 PRP1 0x0 PRP2 0x0 00:30:42.480 [2024-07-15 19:36:46.655669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.480 [2024-07-15 19:36:46.655676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.480 [2024-07-15 19:36:46.655681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.480 [2024-07-15 19:36:46.655686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43040 len:8 PRP1 0x0 PRP2 0x0 00:30:42.480 [2024-07-15 19:36:46.655692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.480 [2024-07-15 19:36:46.655700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.480 [2024-07-15 19:36:46.655706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.480 [2024-07-15 19:36:46.655711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43048 len:8 PRP1 0x0 PRP2 0x0 00:30:42.480 [2024-07-15 19:36:46.655717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.480 [2024-07-15 19:36:46.655724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.480 [2024-07-15 19:36:46.655728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.480 [2024-07-15 19:36:46.655734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43056 len:8 PRP1 0x0 PRP2 0x0 00:30:42.480 [2024-07-15 19:36:46.655740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.480 [2024-07-15 19:36:46.655750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.480 [2024-07-15 19:36:46.655757] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.480 [2024-07-15 19:36:46.655765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43064 len:8 PRP1 0x0 PRP2 0x0 00:30:42.480 [2024-07-15 19:36:46.655772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.480 [2024-07-15 19:36:46.655780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.480 [2024-07-15 19:36:46.655785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.480 [2024-07-15 19:36:46.655791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43072 len:8 PRP1 0x0 PRP2 0x0 00:30:42.480 [2024-07-15 19:36:46.655797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.480 [2024-07-15 19:36:46.655804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.480 [2024-07-15 19:36:46.655810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.480 [2024-07-15 19:36:46.655816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43080 len:8 PRP1 0x0 PRP2 0x0 00:30:42.481 [2024-07-15 19:36:46.655822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.481 [2024-07-15 19:36:46.655828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.481 [2024-07-15 19:36:46.655833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.481 [2024-07-15 19:36:46.655841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43088 len:8 PRP1 0x0 PRP2 0x0 00:30:42.481 [2024-07-15 19:36:46.655848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.481 [2024-07-15 19:36:46.655855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.482 [2024-07-15 19:36:46.655863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.482 [2024-07-15 19:36:46.655870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43096 len:8 PRP1 0x0 PRP2 0x0 00:30:42.482 [2024-07-15 19:36:46.655878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.482 [2024-07-15 19:36:46.655885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.482 [2024-07-15 19:36:46.655890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.482 [2024-07-15 19:36:46.655896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43104 len:8 PRP1 0x0 PRP2 0x0 00:30:42.482 [2024-07-15 19:36:46.655902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.482 [2024-07-15 19:36:46.655909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.482 [2024-07-15 19:36:46.655915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:42.482 [2024-07-15 19:36:46.655921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43112 len:8 PRP1 0x0 PRP2 0x0 00:30:42.482 [2024-07-15 19:36:46.655927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.482 [2024-07-15 19:36:46.655934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.482 [2024-07-15 19:36:46.655938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.482 [2024-07-15 19:36:46.655944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43120 len:8 PRP1 0x0 PRP2 0x0 00:30:42.482 [2024-07-15 19:36:46.655951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.482 [2024-07-15 19:36:46.655959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.482 [2024-07-15 19:36:46.655965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.482 [2024-07-15 19:36:46.655971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43128 len:8 PRP1 0x0 PRP2 0x0 00:30:42.482 [2024-07-15 19:36:46.655977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.483 [2024-07-15 19:36:46.655983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.483 [2024-07-15 19:36:46.655989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.483 [2024-07-15 19:36:46.655994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43136 len:8 PRP1 0x0 PRP2 0x0 00:30:42.483 [2024-07-15 19:36:46.656001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.483 [2024-07-15 19:36:46.656007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.483 [2024-07-15 19:36:46.656012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.483 [2024-07-15 19:36:46.656019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43144 len:8 PRP1 0x0 PRP2 0x0 00:30:42.483 [2024-07-15 19:36:46.656025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.483 [2024-07-15 19:36:46.656032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.483 [2024-07-15 19:36:46.656037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.483 [2024-07-15 19:36:46.656042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43152 len:8 PRP1 0x0 PRP2 0x0 00:30:42.483 [2024-07-15 19:36:46.656049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.483 [2024-07-15 19:36:46.656055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.484 [2024-07-15 19:36:46.656060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.484 [2024-07-15 
19:36:46.656065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43160 len:8 PRP1 0x0 PRP2 0x0 00:30:42.484 [2024-07-15 19:36:46.656073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.484 [2024-07-15 19:36:46.656080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.484 [2024-07-15 19:36:46.656085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.484 [2024-07-15 19:36:46.656091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43168 len:8 PRP1 0x0 PRP2 0x0 00:30:42.484 [2024-07-15 19:36:46.656097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.484 [2024-07-15 19:36:46.656104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.484 [2024-07-15 19:36:46.656108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.484 [2024-07-15 19:36:46.656114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43176 len:8 PRP1 0x0 PRP2 0x0 00:30:42.484 [2024-07-15 19:36:46.656120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.484 [2024-07-15 19:36:46.656127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.485 [2024-07-15 19:36:46.656133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.485 [2024-07-15 19:36:46.656140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43184 len:8 PRP1 0x0 PRP2 0x0 00:30:42.485 [2024-07-15 19:36:46.656146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.485 [2024-07-15 19:36:46.656153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.485 [2024-07-15 19:36:46.656158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.485 [2024-07-15 19:36:46.656163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43192 len:8 PRP1 0x0 PRP2 0x0 00:30:42.485 [2024-07-15 19:36:46.656169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.485 [2024-07-15 19:36:46.656175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.485 [2024-07-15 19:36:46.656181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.485 [2024-07-15 19:36:46.656186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43200 len:8 PRP1 0x0 PRP2 0x0 00:30:42.485 [2024-07-15 19:36:46.656193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.485 [2024-07-15 19:36:46.656199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.485 [2024-07-15 19:36:46.656204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.485 [2024-07-15 19:36:46.656210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43208 len:8 PRP1 0x0 PRP2 0x0 00:30:42.485 [2024-07-15 19:36:46.656216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.485 [2024-07-15 19:36:46.656222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.485 [2024-07-15 19:36:46.656231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.485 [2024-07-15 19:36:46.656237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43216 len:8 PRP1 0x0 PRP2 0x0 00:30:42.485 [2024-07-15 19:36:46.656243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.485 [2024-07-15 19:36:46.656250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.486 [2024-07-15 19:36:46.656256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.486 [2024-07-15 19:36:46.656261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43224 len:8 PRP1 0x0 PRP2 0x0 00:30:42.486 [2024-07-15 19:36:46.656270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.486 [2024-07-15 19:36:46.656277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.486 [2024-07-15 19:36:46.656282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.486 [2024-07-15 19:36:46.656288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43232 len:8 PRP1 0x0 PRP2 0x0 00:30:42.486 [2024-07-15 19:36:46.656295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.486 [2024-07-15 19:36:46.656302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.486 [2024-07-15 19:36:46.656308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.486 [2024-07-15 19:36:46.656314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43240 len:8 PRP1 0x0 PRP2 0x0 00:30:42.486 [2024-07-15 19:36:46.656321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.486 [2024-07-15 19:36:46.656328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.486 [2024-07-15 19:36:46.656334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.486 [2024-07-15 19:36:46.656340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43248 len:8 PRP1 0x0 PRP2 0x0 00:30:42.486 [2024-07-15 19:36:46.656346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.486 [2024-07-15 19:36:46.656354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.486 [2024-07-15 19:36:46.656360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.486 [2024-07-15 19:36:46.656365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:43256 len:8 PRP1 0x0 PRP2 0x0 00:30:42.486 [2024-07-15 19:36:46.656372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.486 [2024-07-15 19:36:46.656378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.486 [2024-07-15 19:36:46.656384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.486 [2024-07-15 19:36:46.656389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43264 len:8 PRP1 0x0 PRP2 0x0 00:30:42.486 [2024-07-15 19:36:46.656396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.486 [2024-07-15 19:36:46.656402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.486 [2024-07-15 19:36:46.656408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.486 [2024-07-15 19:36:46.656414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43272 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.656420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.656428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43280 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43288 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43296 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43304 len:8 PRP1 0x0 PRP2 0x0 
00:30:42.487 [2024-07-15 19:36:46.666439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43312 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43320 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43328 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43336 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43344 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43352 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43360 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43368 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43376 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43384 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43392 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43400 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:42.487 [2024-07-15 19:36:46.666856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:42.487 [2024-07-15 19:36:46.666863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43408 len:8 PRP1 0x0 PRP2 0x0 00:30:42.487 [2024-07-15 19:36:46.666873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666920] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19529f0 was disconnected and freed. reset controller. 00:30:42.487 [2024-07-15 19:36:46.666932] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:42.487 [2024-07-15 19:36:46.666959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.487 [2024-07-15 19:36:46.666970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.666980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.487 [2024-07-15 19:36:46.666989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.667000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.487 [2024-07-15 19:36:46.667009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.667021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.487 [2024-07-15 19:36:46.667030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.487 [2024-07-15 19:36:46.667039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:42.487 [2024-07-15 19:36:46.667078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1787dd0 (9): Bad file descriptor 00:30:42.487 [2024-07-15 19:36:46.671371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:42.487 [2024-07-15 19:36:46.707574] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
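Note on the block above: the long run of READ/WRITE abort notices is the expected side effect of the submission queue being deleted while bdev_nvme tears down a path; the records that matter to the test are the qpair disconnect, the failover between trids, and the final 'Resetting controller successful' notice. A minimal sketch for reading a capture like this, assuming the bdevperf output has been saved to try.txt the way the harness does further down:

  # drop the per-command abort spam so the path and reset events stay visible
  grep -v 'ABORTED - SQ DELETION' try.txt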
00:30:42.487
00:30:42.487 Latency(us)
00:30:42.487 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:42.487 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:42.487 Verification LBA range: start 0x0 length 0x4000
00:30:42.487 NVMe0n1                     :      15.01   10885.79      42.52     555.70       0.00   11164.81     633.99   21313.45
00:30:42.487 ===================================================================================================================
00:30:42.487 Total                       :           10885.79      42.52     555.70       0.00   11164.81     633.99   21313.45
00:30:42.487 Received shutdown signal, test time was about 15.000000 seconds
00:30:42.487
00:30:42.487 Latency(us)
00:30:42.487 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:42.487 ===================================================================================================================
00:30:42.487 Total                       :               0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1785423
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1785423 /var/tmp/bdevperf.sock
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1785423 ']'
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:42.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
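For reference, the bdevperf command traced above starts the application idle (-z) with a JSON-RPC socket (-r /var/tmp/bdevperf.sock) so the run can be configured and then kicked off over RPC; -q, -o, -w and -t set queue depth 128, 4096-byte I/O, a verify workload and a 1 second runtime, and waitforlisten blocks until the socket answers. A rough equivalent of the launch-and-wait step, assuming it is run from the SPDK repository root and using rpc_get_methods as the readiness probe (an assumption; the harness's own helper is more involved):

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # poll the RPC socket until bdevperf is ready to accept configuration RPCs
  until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done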
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:30:42.487 19:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:42.749 [2024-07-15 19:36:53.400710] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:42.749 19:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:42.749 [2024-07-15 19:36:53.585242] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:30:43.006 19:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:43.264 NVMe0n1
00:30:43.264 19:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:43.264
00:30:43.520 19:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:43.778
00:30:43.778 19:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:43.778 19:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:30:44.036 19:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:44.036 19:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:30:47.312 19:36:57 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:47.312 19:36:57 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:30:47.312 19:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:47.312 19:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1786336
00:30:47.312 19:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1786336
00:30:48.684 0
00:30:48.684 19:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:48.684 [2024-07-15 19:36:53.052968] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization...
00:30:48.684 [2024-07-15 19:36:53.053026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1785423 ] 00:30:48.684 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.684 [2024-07-15 19:36:53.079693] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:48.684 [2024-07-15 19:36:53.108566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.684 [2024-07-15 19:36:53.145286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.684 [2024-07-15 19:36:54.852277] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:48.684 [2024-07-15 19:36:54.852324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.684 [2024-07-15 19:36:54.852335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.684 [2024-07-15 19:36:54.852344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.684 [2024-07-15 19:36:54.852351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.684 [2024-07-15 19:36:54.852358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.684 [2024-07-15 19:36:54.852364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.684 [2024-07-15 19:36:54.852372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.684 [2024-07-15 19:36:54.852378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.684 [2024-07-15 19:36:54.852385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.684 [2024-07-15 19:36:54.852410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.684 [2024-07-15 19:36:54.852423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a86dd0 (9): Bad file descriptor 00:30:48.684 [2024-07-15 19:36:54.855660] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:48.684 Running I/O for 1 seconds... 
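For context: the try.txt excerpt above shows what the failover looks like from the host side. Once the 4420 listener goes away the admin queue is torn down (ABORTED - SQ DELETION), bdev_nvme marks nqn.2016-06.io.spdk:cnode1 as failed, disconnects, and resets against the next registered path, which is why the reset succeeds on 10.0.0.2:4421. One hedged way to confirm which portal the controller settled on, using the same jq path the discovery test applies later in this log:

    # Print the trsvcid of the path NVMe0 is currently connected through;
    # after 4420 is detached this should report 4421, per the failover notice above
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'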
00:30:48.684 00:30:48.684 Latency(us) 00:30:48.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.685 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:48.685 Verification LBA range: start 0x0 length 0x4000 00:30:48.685 NVMe0n1 : 1.05 10583.07 41.34 0.00 0.00 11670.91 2322.25 45590.26 00:30:48.685 =================================================================================================================== 00:30:48.685 Total : 10583.07 41.34 0.00 0.00 11670.91 2322.25 45590.26 00:30:48.685 19:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:48.685 19:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:48.685 19:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:48.942 19:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:48.942 19:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:48.942 19:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.200 19:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:52.475 19:37:02 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:52.475 19:37:02 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1785423 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1785423 ']' 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1785423 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1785423 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1785423' 00:30:52.475 killing process with pid 1785423 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1785423 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1785423 00:30:52.475 19:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:52.731 19:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.731 19:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:52.731 
19:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:52.731 19:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:52.731 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:52.731 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:52.731 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:52.731 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:52.731 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:52.731 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:52.731 rmmod nvme_tcp 00:30:52.731 rmmod nvme_fabrics 00:30:52.731 rmmod nvme_keyring 00:30:52.731 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1782626 ']' 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1782626 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1782626 ']' 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1782626 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1782626 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1782626' 00:30:52.989 killing process with pid 1782626 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1782626 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1782626 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.989 19:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.585 19:37:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:55.585 00:30:55.585 real 0m36.171s 00:30:55.585 user 1m55.644s 00:30:55.585 sys 0m7.179s 00:30:55.585 19:37:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:55.585 19:37:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:30:55.586 ************************************ 00:30:55.586 END TEST nvmf_failover 00:30:55.586 ************************************ 00:30:55.586 19:37:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:55.586 19:37:05 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:55.586 19:37:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:55.586 19:37:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.586 19:37:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.586 ************************************ 00:30:55.586 START TEST nvmf_host_discovery 00:30:55.586 ************************************ 00:30:55.586 19:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:55.586 * Looking for test storage... 00:30:55.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:55.586 19:37:06 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:55.586 19:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.844 19:37:11 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:00.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:00.844 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:00.844 19:37:11 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:00.844 Found net devices under 0000:86:00.0: cvl_0_0 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.844 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:00.845 Found net devices under 0000:86:00.1: cvl_0_1 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.845 19:37:11 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:00.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:31:00.845 00:31:00.845 --- 10.0.0.2 ping statistics --- 00:31:00.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.845 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:00.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:31:00.845 00:31:00.845 --- 10.0.0.1 ping statistics --- 00:31:00.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.845 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1790559 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1790559 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1790559 ']' 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.845 [2024-07-15 19:37:11.450451] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:31:00.845 [2024-07-15 19:37:11.450490] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.845 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.845 [2024-07-15 19:37:11.480063] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:00.845 [2024-07-15 19:37:11.508781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.845 [2024-07-15 19:37:11.548639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.845 [2024-07-15 19:37:11.548679] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.845 [2024-07-15 19:37:11.548685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.845 [2024-07-15 19:37:11.548692] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.845 [2024-07-15 19:37:11.548697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
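For context: the discovery test runs its target inside the cvl_0_0_ns_spdk namespace set up above, so the target (10.0.0.2 on cvl_0_0, inside the namespace) and the initiator (10.0.0.1 on cvl_0_1, in the root namespace) talk over the two e810 ports that were just flushed, readdressed and pinged. A hedged sketch of the target bring-up visible in the trace, again using SPDK_DIR only as shorthand:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed shorthand

    # Start the discovery-test target inside the namespace: -i shared-memory id,
    # -e 0xFFFF enables all tracepoint groups, -m 0x2 puts the reactor on core 1
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # The RPC socket is a pathname UNIX socket, so it is reachable from the root
    # namespace; poll it (as waitforlisten does) before issuing any RPCs
    until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done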
00:31:00.845 [2024-07-15 19:37:11.548716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.845 [2024-07-15 19:37:11.672665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.845 [2024-07-15 19:37:11.680825] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.845 null0 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.845 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.104 null1 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1790601 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1790601 /tmp/host.sock 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1790601 ']' 00:31:01.104 19:37:11 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:01.104 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.104 [2024-07-15 19:37:11.757535] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:31:01.104 [2024-07-15 19:37:11.757577] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1790601 ] 00:31:01.104 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.104 [2024-07-15 19:37:11.783950] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:01.104 [2024-07-15 19:37:11.811647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.104 [2024-07-15 19:37:11.853200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:01.104 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:01.363 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:01.363 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:01.363 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:31:01.363 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:01.363 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.363 19:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:01.363 19:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- 
# [[ '' == '' ]] 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:01.363 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.621 [2024-07-15 19:37:12.274351] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:01.621 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:01.622 
19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:01.622 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.880 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:31:01.880 19:37:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:02.447 [2024-07-15 19:37:12.999387] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:02.447 [2024-07-15 19:37:12.999407] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:02.447 [2024-07-15 19:37:12.999425] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:02.447 [2024-07-15 19:37:13.085685] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:02.447 [2024-07-15 19:37:13.263891] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:02.447 [2024-07-15 19:37:13.263912] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:02.705 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.963 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:02.964 19:37:13 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 [2024-07-15 19:37:13.762390] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:02.964 [2024-07-15 19:37:13.763515] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:02.964 [2024-07-15 19:37:13.763538] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:02.964 19:37:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:03.223 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.224 [2024-07-15 19:37:13.890232] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:03.224 19:37:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:03.482 [2024-07-15 19:37:14.198557] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:03.482 [2024-07-15 19:37:14.198577] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:03.482 [2024-07-15 19:37:14.198582] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.420 19:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.420 [2024-07-15 19:37:15.025953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.420 [2024-07-15 19:37:15.025979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.420 [2024-07-15 19:37:15.025988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.420 [2024-07-15 19:37:15.025996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.420 [2024-07-15 19:37:15.026003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.420 [2024-07-15 19:37:15.026010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.420 [2024-07-15 19:37:15.026017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.420 [2024-07-15 19:37:15.026024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.420 [2024-07-15 19:37:15.026031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2429630 is same with the state(5) to be set 00:31:04.420 [2024-07-15 19:37:15.026868] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:04.420 [2024-07-15 19:37:15.026881] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:04.420 19:37:15 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.420 [2024-07-15 19:37:15.035942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2429630 (9): Bad file descriptor 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.420 [2024-07-15 19:37:15.045979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:04.420 [2024-07-15 19:37:15.046275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.420 [2024-07-15 19:37:15.046291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2429630 with addr=10.0.0.2, port=4420 00:31:04.420 [2024-07-15 19:37:15.046299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2429630 is same with the state(5) to be set 00:31:04.420 [2024-07-15 19:37:15.046311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2429630 (9): Bad file descriptor 00:31:04.420 [2024-07-15 19:37:15.046322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:04.420 [2024-07-15 19:37:15.046329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:04.420 [2024-07-15 19:37:15.046337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:04.420 [2024-07-15 19:37:15.046348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
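[editor's note] The xtrace entries above repeatedly expand a waitforcondition helper from common/autotest_common.sh: it evaluates a condition string up to ten times, one second apart, and returns success as soon as the condition holds. A minimal sketch reconstructed from the trace follows; the actual helper in the SPDK tree may differ in details such as error reporting.

    waitforcondition() {
        # cond is an arbitrary bash expression passed as one string,
        # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local cond=$1
        local max=10
        while (( max-- )); do
            # eval so command substitutions inside the condition are
            # re-run on every iteration
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }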
00:31:04.420 [2024-07-15 19:37:15.056032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:04.420 [2024-07-15 19:37:15.056221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.420 [2024-07-15 19:37:15.056239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2429630 with addr=10.0.0.2, port=4420 00:31:04.420 [2024-07-15 19:37:15.056246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2429630 is same with the state(5) to be set 00:31:04.420 [2024-07-15 19:37:15.056256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2429630 (9): Bad file descriptor 00:31:04.420 [2024-07-15 19:37:15.056266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:04.420 [2024-07-15 19:37:15.056272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:04.420 [2024-07-15 19:37:15.056278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:04.420 [2024-07-15 19:37:15.056287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:04.420 [2024-07-15 19:37:15.066082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:04.420 [2024-07-15 19:37:15.066374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.420 [2024-07-15 19:37:15.066387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2429630 with addr=10.0.0.2, port=4420 00:31:04.420 [2024-07-15 19:37:15.066395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2429630 is same with the state(5) to be set 00:31:04.420 [2024-07-15 19:37:15.066405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2429630 (9): Bad file descriptor 00:31:04.420 [2024-07-15 19:37:15.066415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:04.420 [2024-07-15 19:37:15.066422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:04.420 [2024-07-15 19:37:15.066429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:04.420 [2024-07-15 19:37:15.066438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:04.420 [2024-07-15 19:37:15.076132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:04.420 [2024-07-15 19:37:15.076402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.420 [2024-07-15 19:37:15.076416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2429630 with addr=10.0.0.2, port=4420 00:31:04.420 [2024-07-15 19:37:15.076424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2429630 is same with the state(5) to be set 00:31:04.420 [2024-07-15 19:37:15.076435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2429630 (9): Bad file descriptor 00:31:04.420 [2024-07-15 19:37:15.076445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:04.420 [2024-07-15 19:37:15.076452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:04.420 [2024-07-15 19:37:15.076458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:04.420 [2024-07-15 19:37:15.076467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.420 [2024-07-15 19:37:15.086186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:04.420 [2024-07-15 19:37:15.086478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.420 [2024-07-15 19:37:15.086491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2429630 with addr=10.0.0.2, port=4420 00:31:04.420 [2024-07-15 19:37:15.086499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2429630 is same with the state(5) to be set 00:31:04.420 [2024-07-15 19:37:15.086509] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2429630 (9): Bad file descriptor 00:31:04.420 [2024-07-15 19:37:15.086519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:04.420 [2024-07-15 19:37:15.086525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:04.420 [2024-07-15 19:37:15.086532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:04.420 [2024-07-15 19:37:15.086541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:04.420 [2024-07-15 19:37:15.096241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:04.420 [2024-07-15 19:37:15.096466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.420 [2024-07-15 19:37:15.096482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2429630 with addr=10.0.0.2, port=4420 00:31:04.420 [2024-07-15 19:37:15.096493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2429630 is same with the state(5) to be set 00:31:04.420 [2024-07-15 19:37:15.096504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2429630 (9): Bad file descriptor 00:31:04.420 [2024-07-15 19:37:15.096514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:04.420 [2024-07-15 19:37:15.096521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:04.420 [2024-07-15 19:37:15.096527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:04.420 [2024-07-15 19:37:15.096537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.420 [2024-07-15 19:37:15.106296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:04.420 [2024-07-15 19:37:15.106572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.420 [2024-07-15 19:37:15.106584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2429630 with addr=10.0.0.2, port=4420 00:31:04.420 [2024-07-15 19:37:15.106592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2429630 is same with the state(5) to be set 00:31:04.420 [2024-07-15 19:37:15.106602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2429630 (9): Bad file descriptor 00:31:04.420 [2024-07-15 19:37:15.106612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:04.420 [2024-07-15 19:37:15.106618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:04.420 [2024-07-15 19:37:15.106625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:04.420 [2024-07-15 19:37:15.106634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
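[editor's note] The get_subsystem_paths and get_bdev_list helpers queried throughout this trace reduce to two host-socket RPC calls piped through jq; this sketch is assembled from the commands visible above, and flag order or quoting may differ from the actual host/discovery.sh.

    get_subsystem_paths() {
        # prints the listening ports (trsvcid) of every path of controller $1,
        # sorted numerically and joined onto one line, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    get_bdev_list() {
        # prints all bdev names on the host side, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }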
00:31:04.420 [2024-07-15 19:37:15.113134] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:04.420 [2024-07-15 19:37:15.113150] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:04.420 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:04.421 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:04.681 
19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.681 19:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.617 [2024-07-15 19:37:16.397504] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:05.617 [2024-07-15 19:37:16.397521] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:05.617 [2024-07-15 19:37:16.397532] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:05.876 [2024-07-15 19:37:16.483802] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:06.135 [2024-07-15 19:37:16.745696] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:06.135 [2024-07-15 19:37:16.745727] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.135 request: 00:31:06.135 { 00:31:06.135 "name": "nvme", 00:31:06.135 "trtype": "tcp", 00:31:06.135 "traddr": "10.0.0.2", 00:31:06.135 "adrfam": "ipv4", 00:31:06.135 "trsvcid": 
"8009", 00:31:06.135 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:06.135 "wait_for_attach": true, 00:31:06.135 "method": "bdev_nvme_start_discovery", 00:31:06.135 "req_id": 1 00:31:06.135 } 00:31:06.135 Got JSON-RPC error response 00:31:06.135 response: 00:31:06.135 { 00:31:06.135 "code": -17, 00:31:06.135 "message": "File exists" 00:31:06.135 } 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.135 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.136 request: 00:31:06.136 { 00:31:06.136 "name": "nvme_second", 00:31:06.136 "trtype": "tcp", 00:31:06.136 "traddr": "10.0.0.2", 00:31:06.136 "adrfam": "ipv4", 00:31:06.136 "trsvcid": "8009", 00:31:06.136 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:06.136 "wait_for_attach": true, 00:31:06.136 "method": "bdev_nvme_start_discovery", 00:31:06.136 "req_id": 1 00:31:06.136 } 00:31:06.136 Got JSON-RPC error response 00:31:06.136 response: 00:31:06.136 { 00:31:06.136 "code": -17, 00:31:06.136 "message": "File exists" 00:31:06.136 } 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.136 19:37:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.513 [2024-07-15 19:37:17.981106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.513 [2024-07-15 19:37:17.981134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242a370 with addr=10.0.0.2, port=8010 00:31:07.513 [2024-07-15 19:37:17.981145] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:07.513 [2024-07-15 19:37:17.981152] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:07.513 [2024-07-15 19:37:17.981158] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:08.487 [2024-07-15 19:37:18.983671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.487 [2024-07-15 19:37:18.983696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242a370 with addr=10.0.0.2, port=8010 00:31:08.487 [2024-07-15 19:37:18.983707] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:08.487 [2024-07-15 19:37:18.983713] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:08.487 [2024-07-15 19:37:18.983719] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:09.420 [2024-07-15 19:37:19.985782] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:09.420 request: 00:31:09.420 { 00:31:09.420 "name": "nvme_second", 00:31:09.420 "trtype": "tcp", 00:31:09.420 "traddr": "10.0.0.2", 00:31:09.420 "adrfam": "ipv4", 00:31:09.420 "trsvcid": "8010", 00:31:09.420 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:09.420 "wait_for_attach": false, 00:31:09.420 "attach_timeout_ms": 3000, 00:31:09.420 "method": "bdev_nvme_start_discovery", 00:31:09.420 "req_id": 1 00:31:09.420 } 00:31:09.420 Got JSON-RPC error response 00:31:09.420 response: 00:31:09.420 { 00:31:09.420 "code": -110, 00:31:09.420 "message": "Connection timed out" 00:31:09.420 } 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:09.420 19:37:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1790601 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:09.420 rmmod nvme_tcp 00:31:09.420 rmmod nvme_fabrics 00:31:09.420 rmmod nvme_keyring 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1790559 ']' 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1790559 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1790559 ']' 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1790559 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1790559 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 1790559' 00:31:09.420 killing process with pid 1790559 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1790559 00:31:09.420 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1790559 00:31:09.678 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:09.678 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:09.678 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:09.678 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:09.678 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:09.678 19:37:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.678 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:09.678 19:37:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.579 19:37:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:11.579 00:31:11.579 real 0m16.396s 00:31:11.579 user 0m20.196s 00:31:11.579 sys 0m5.129s 00:31:11.579 19:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:11.579 19:37:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.579 ************************************ 00:31:11.579 END TEST nvmf_host_discovery 00:31:11.579 ************************************ 00:31:11.579 19:37:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:11.579 19:37:22 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:11.579 19:37:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:11.579 19:37:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:11.579 19:37:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:11.838 ************************************ 00:31:11.838 START TEST nvmf_host_multipath_status 00:31:11.838 ************************************ 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:11.838 * Looking for test storage... 
00:31:11.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.838 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:11.839 19:37:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:11.839 19:37:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:17.211 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:17.211 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
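(For context, a minimal sketch of the NIC-to-netdev mapping the trace above is performing; the PCI addresses, device ID and resulting interface names are the ones shown in the log, while the loop itself is illustrative rather than the literal nvmf/common.sh code.)
# map the detected Intel E810 functions (0x159b, found above) to their kernel net devices
pci_devs=(0000:86:00.0 0000:86:00.1)                  # values observed in the log
net_devs=()
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs exposes the netdev name per PCI function
  pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs prefix, keeping e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done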
00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:17.211 Found net devices under 0000:86:00.0: cvl_0_0 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.211 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:17.212 Found net devices under 0000:86:00.1: cvl_0_1 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:17.212 19:37:27 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:17.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:31:17.212 00:31:17.212 --- 10.0.0.2 ping statistics --- 00:31:17.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.212 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:17.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:31:17.212 00:31:17.212 --- 10.0.0.1 ping statistics --- 00:31:17.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.212 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1795653 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1795653 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1795653 ']' 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:17.212 19:37:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.212 [2024-07-15 19:37:27.958313] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 
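(A condensed sketch, assuming the namespace, binary path and core mask shown above, of how the target is brought up at this point in the trace; waitforlisten is the autotest helper visible in the log that waits for the process to answer on its UNIX RPC socket before any rpc.py call is issued.)
# start nvmf_tgt inside the target network namespace and wait for its RPC socket
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip netns exec "$NVMF_TARGET_NAMESPACE" \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
waitforlisten "$nvmfpid"    # returns once /var/tmp/spdk.sock is up and listening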
00:31:17.212 [2024-07-15 19:37:27.958355] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.212 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.212 [2024-07-15 19:37:27.988128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:17.212 [2024-07-15 19:37:28.015806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:17.212 [2024-07-15 19:37:28.056816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.212 [2024-07-15 19:37:28.056856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.212 [2024-07-15 19:37:28.056863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.212 [2024-07-15 19:37:28.056869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.212 [2024-07-15 19:37:28.056874] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.212 [2024-07-15 19:37:28.056915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.212 [2024-07-15 19:37:28.056918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.471 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:17.471 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:17.471 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:17.471 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:17.471 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.471 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.471 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1795653 00:31:17.471 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:17.729 [2024-07-15 19:37:28.330424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.729 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:17.729 Malloc0 00:31:17.729 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:17.988 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:18.247 19:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:31:18.247 [2024-07-15 19:37:29.023409] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.247 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:18.507 [2024-07-15 19:37:29.203910] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:18.507 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1795874 00:31:18.507 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:18.507 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:18.507 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1795874 /var/tmp/bdevperf.sock 00:31:18.507 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1795874 ']' 00:31:18.507 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:18.507 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:18.507 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:18.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
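(What follows in the trace is the multipath wiring against the bdevperf RPC socket; below is a minimal sketch of those calls with all values taken from the log, plus the jq filter that the port_status helper applies during each check_status pass. The $rpc shorthand is only for brevity here and is not part of the test script.)
rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
$rpc bdev_nvme_set_options -r -1                      # options exactly as issued in the trace
# same controller name (Nvme0) attached over both listeners; the second path is added with -x multipath
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# per-port status (current / connected / accessible), as queried for every check_status call
$rpc bdev_nvme_get_io_paths | \
  jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'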
00:31:18.507 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:18.507 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:18.766 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:18.766 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:18.766 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:18.766 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:19.333 Nvme0n1 00:31:19.333 19:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:19.591 Nvme0n1 00:31:19.850 19:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:19.850 19:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:21.753 19:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:21.753 19:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:22.012 19:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:22.012 19:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:23.389 19:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:23.389 19:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:23.389 19:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.389 19:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:23.389 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.389 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:23.390 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:23.390 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.390 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:23.390 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:23.390 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.390 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:23.649 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.649 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:23.649 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.649 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:23.908 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.908 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:23.908 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.908 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:23.908 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.908 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:23.908 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.908 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:24.167 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.167 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:24.167 19:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:24.426 19:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:24.685 19:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:25.620 19:37:36 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:25.620 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:25.620 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.620 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:25.879 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:25.879 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:25.879 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.879 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:25.879 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.879 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:25.879 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:25.879 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.138 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.138 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:26.138 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:26.138 19:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.397 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.397 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:26.397 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:26.397 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.397 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.397 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:26.397 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.397 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:26.655 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.655 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:26.655 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:26.913 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:26.913 19:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:28.292 19:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:28.292 19:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:28.292 19:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.292 19:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:28.292 19:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.292 19:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:28.292 19:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.292 19:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:28.292 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:28.292 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:28.569 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.569 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:28.569 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.569 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:28.569 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.569 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:28.828 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.828 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:28.828 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.828 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:29.087 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.087 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:29.087 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.087 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:29.087 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.087 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:29.087 19:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:29.346 19:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:29.606 19:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:30.543 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:30.543 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:30.543 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.543 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:30.805 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.805 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:30.805 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.805 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:30.805 19:37:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:30.805 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:30.805 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.805 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:31.111 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.111 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:31.111 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.111 19:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:31.389 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.389 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:31.389 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.389 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:31.389 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.389 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:31.389 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.389 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:31.649 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:31.649 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:31.649 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:31.908 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:31.908 19:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:33.283 19:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:33.283 19:37:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:33.283 19:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.283 19:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:33.283 19:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:33.283 19:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:33.284 19:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.284 19:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:33.284 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:33.284 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:33.284 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.284 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:33.542 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.542 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:33.542 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.542 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:33.800 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.800 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:33.800 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.800 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:33.800 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:33.800 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:33.800 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.800 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:34.060 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.060 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:34.060 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:34.319 19:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:34.319 19:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.696 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:35.955 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.955 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:35.955 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.955 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:36.214 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.214 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:36.214 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.214 19:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:36.214 19:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:36.214 19:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:36.214 19:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.214 19:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:36.473 19:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.473 19:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:36.732 19:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:36.732 19:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:36.991 19:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:36.991 19:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:38.369 19:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:38.369 19:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:38.369 19:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.369 19:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:38.369 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.369 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:38.369 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.369 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:31:38.369 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.369 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:38.369 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.369 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:38.628 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.628 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:38.628 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.628 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:38.887 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.887 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:38.887 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.887 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:38.887 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.887 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:38.887 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.887 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:39.146 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.146 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:39.146 19:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:39.405 19:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:39.664 19:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:40.600 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:31:40.600 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:40.600 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.601 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:40.859 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.860 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:40.860 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.860 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:40.860 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.860 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:40.860 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.860 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:41.118 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.118 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:41.118 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.118 19:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:41.377 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.377 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:41.377 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.377 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:41.636 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.636 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:41.636 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.636 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:41.636 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.636 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:41.636 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:41.894 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:42.153 19:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:43.089 19:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:43.089 19:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:43.089 19:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.089 19:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.348 19:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.348 19:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:43.348 19:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.348 19:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:43.348 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.348 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:43.348 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.348 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:43.608 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.608 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:43.608 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.608 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:43.867 19:37:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.867 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:43.867 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.867 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:44.126 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.126 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:44.126 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.126 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:44.126 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.127 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:44.127 19:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:44.385 19:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:44.643 19:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:45.580 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:45.580 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:45.580 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.580 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:45.839 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.839 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:45.839 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.839 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:46.098 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:46.098 19:37:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:46.098 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.098 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:46.098 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.098 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:46.098 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.098 19:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:46.387 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.387 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:46.387 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:46.387 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1795874 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1795874 ']' 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1795874 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1795874 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
1795874' 00:31:46.645 killing process with pid 1795874 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1795874 00:31:46.645 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1795874 00:31:46.920 Connection closed with partial response: 00:31:46.920 00:31:46.920 00:31:46.920 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1795874 00:31:46.920 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:46.920 [2024-07-15 19:37:29.262258] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:31:46.920 [2024-07-15 19:37:29.262310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795874 ] 00:31:46.920 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.920 [2024-07-15 19:37:29.288470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:46.921 [2024-07-15 19:37:29.313512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.921 [2024-07-15 19:37:29.352823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.921 Running I/O for 90 seconds... 00:31:46.921 [2024-07-15 19:37:42.515219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 
[2024-07-15 19:37:42.515391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.515595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35192 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.515604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.516170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.516192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.516212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.516238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.516258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.516278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.516300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.921 [2024-07-15 19:37:42.516326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:31:46.921 [2024-07-15 19:37:42.516613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.921 [2024-07-15 19:37:42.516684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:46.921 [2024-07-15 19:37:42.516698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.516981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.516989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.922 [2024-07-15 19:37:42.517481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:46.922 [2024-07-15 19:37:42.517500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:98 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.923 [2024-07-15 19:37:42.517796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:46.923 [2024-07-15 19:37:42.517813] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.923 [2024-07-15 19:37:42.517819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
[... repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs: READ and WRITE commands on sqid:1 nsid:1 len:8 (lba ranges ~34832-35368 and ~73648-74848), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, logged between 2024-07-15 19:37:42.517 and 19:37:55.324 (00:31:46.923-00:31:46.928) ...]
00:31:46.928 [2024-07-15 19:37:55.324524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.928 [2024-07-15 19:37:55.324531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:46.928 [2024-07-15 19:37:55.324544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:46.928 [2024-07-15 19:37:55.324551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:46.928 [2024-07-15 19:37:55.324563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.928 [2024-07-15 19:37:55.324571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.928 [2024-07-15 19:37:55.324584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.928 [2024-07-15 19:37:55.324591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:46.928 [2024-07-15 19:37:55.324604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.928 [2024-07-15 19:37:55.324611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:46.928 [2024-07-15 19:37:55.324623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.324630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.324642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.324649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.324662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.324669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.324681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.324688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.324701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.324708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.324721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.324728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.325911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.929 [2024-07-15 19:37:55.325927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.325943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.325950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.325964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.325971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.325983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.325993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.326498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.326511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:31:46.929 [2024-07-15 19:37:55.326529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.326537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.328686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.929 [2024-07-15 19:37:55.328703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.328719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.929 [2024-07-15 19:37:55.328726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:46.929 [2024-07-15 19:37:55.328740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.328746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.328766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.328785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.328804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.328826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.328844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.328864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.328886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.328905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.328926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.328946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.328966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.328985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.328997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.329004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.329023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.329043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.329063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.329082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.329101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.329122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.329144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.329163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.329183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.329203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.329895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.329917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:46.930 [2024-07-15 19:37:55.329938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.329958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.930 [2024-07-15 19:37:55.329978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.329991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.329999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.330012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.330018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.330031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.330039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.330055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.330063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:46.930 [2024-07-15 19:37:55.330076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.930 [2024-07-15 19:37:55.330083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.330795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.330907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.330914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:31:46.931 [2024-07-15 19:37:55.330926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.338121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.338145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.338165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.338185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.338205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.338228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.338249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.338269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.338288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.338308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.338328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.338982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.338996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.339012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.339023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.339036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.339043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.339056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.339064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.339078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.339086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.339098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.931 [2024-07-15 19:37:55.339105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.339118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.339125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.339138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.931 [2024-07-15 19:37:55.339146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:46.931 [2024-07-15 19:37:55.339160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.339233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.339253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.339336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.932 [2024-07-15 19:37:55.339378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.339439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.339478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.339498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.339511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.339518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.340684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.340707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.340728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.340748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.340767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.340788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.340807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.340827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.340847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.340867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.340887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.340907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.340930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.340950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.340970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.340982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.932 [2024-07-15 19:37:55.340990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.341003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.341009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.341022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.341029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.341042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.341049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:46.932 [2024-07-15 19:37:55.341063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.932 [2024-07-15 19:37:55.341071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.341091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.341111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:31:46.933 [2024-07-15 19:37:55.341123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.341131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.341151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.341173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.341194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.341214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.341240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.341260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.341280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.341300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.341320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.341333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.341340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.342982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.342999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.343026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.343048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.343072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.343093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.343115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.343135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.343156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.343177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.343196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.343216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.343243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.343264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.343285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.343305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.343326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.933 [2024-07-15 19:37:55.343348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:46.933 [2024-07-15 19:37:55.343361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.933 [2024-07-15 19:37:55.343369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:46.934 [2024-07-15 19:37:55.343389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.343409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.343427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.343447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.343467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.343487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.343507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.343527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.343547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.343566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.343582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 
nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.343589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.344280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.344302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.344324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.344346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.344366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.344387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.344407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.344428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.344450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.934 [2024-07-15 19:37:55.344471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.344492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.344517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.344537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:46.934 [2024-07-15 19:37:55.344549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.934 [2024-07-15 19:37:55.344557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.344569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.344577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.344589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.344596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.344609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.344616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.344997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:31:46.935 [2024-07-15 19:37:55.345044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.345092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.345112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.345134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.345154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.345282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.345302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.345324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.345383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.935 [2024-07-15 19:37:55.345405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.935 [2024-07-15 19:37:55.345425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:46.935 [2024-07-15 19:37:55.345437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.345444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.345457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.345464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.345477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.345484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.345496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.345506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.345519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.345526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.346222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.346251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.346272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.346292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.346312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:46.936 [2024-07-15 19:37:55.346335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.346355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.346375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.346395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.346415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.346435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.346455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.346475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.346488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.346494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.347052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.347075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.347096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.347119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.347140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.347161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.347182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.347202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.347222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.347249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.936 [2024-07-15 19:37:55.347270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:46.936 [2024-07-15 19:37:55.347283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.936 [2024-07-15 19:37:55.347291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.347304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.347312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.347325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.347333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.347345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.347352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.347365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.347374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.347387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.347393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.347407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.347414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.348058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.348081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.348102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 
dnr:0 00:31:46.937 [2024-07-15 19:37:55.348116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.348123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.348144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.348164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.348185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.348205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.348230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.348252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.348277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.348297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.348319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.348340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.348353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.348361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.349276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.349293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.349309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.349319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.349333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.349341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.349354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.349362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.349375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.937 [2024-07-15 19:37:55.349383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.349396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.349404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.349417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.349425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.349441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.937 [2024-07-15 19:37:55.349448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:46.937 [2024-07-15 19:37:55.349462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:46.938 [2024-07-15 19:37:55.349659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.349909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.349923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.349930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.351541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.351563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.351580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.938 [2024-07-15 19:37:55.351588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.351602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.351610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.351623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.351631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.351644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.351652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.351665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.351673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.351686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.938 [2024-07-15 19:37:55.351693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:46.938 [2024-07-15 19:37:55.351706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.351714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.351734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.351755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.351777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.351797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.351820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.351840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.351860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.351881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:31:46.939 [2024-07-15 19:37:55.351893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.351901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.351922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.351943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.351963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.351983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.351996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.352003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.352024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.352044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.352064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.352087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.352107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.352128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.352149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.352170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.352191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.352212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.352238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.352253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.939 [2024-07-15 19:37:55.352260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.353439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.353457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.353474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.353482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.353496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.939 [2024-07-15 19:37:55.353505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:46.939 [2024-07-15 19:37:55.353521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.353529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.353550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.940 [2024-07-15 19:37:55.353573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.940 [2024-07-15 19:37:55.353594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.940 [2024-07-15 19:37:55.353614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.940 [2024-07-15 19:37:55.353635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.353657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.353677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:46.940 [2024-07-15 19:37:55.353698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.353718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.353738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.353759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.940 [2024-07-15 19:37:55.353781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.353794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.353802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.354349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.354373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.354395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.940 [2024-07-15 19:37:55.354416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.940 [2024-07-15 19:37:55.354437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.354457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.940 [2024-07-15 19:37:55.354480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.940 [2024-07-15 19:37:55.354501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.354521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.940 [2024-07-15 19:37:55.354542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:46.940 [2024-07-15 19:37:55.354555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.354566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.354578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.354586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.354599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.354607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.354620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.354628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.354642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.354650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.355032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.355046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.355062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.355070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.355084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.355092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.355106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.355113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.355126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.355133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.355146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.355154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.355167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.355176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.355188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.355195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.355211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.355219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:31:46.941 [2024-07-15 19:37:55.355237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.355245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.355259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.355266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.356146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.356171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.356192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.356214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.356241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.356262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.356283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.356304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.356325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.356349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.941 [2024-07-15 19:37:55.356369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.356391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.356413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:46.941 [2024-07-15 19:37:55.356427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.941 [2024-07-15 19:37:55.356435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.356497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.356560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.356603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.356644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.942 [2024-07-15 19:37:55.356727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.942 [2024-07-15 19:37:55.356830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.356853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.356874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.356887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.356895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.358455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.358474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.358490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.358498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.358513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.358520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.358534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.358542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.358555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.358564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.358576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.358584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.358597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.358605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:46.942 [2024-07-15 19:37:55.358618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.942 [2024-07-15 19:37:55.358626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.358646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.358666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.358691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.358712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.358732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.358753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.358774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.358795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.358816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.358838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.358859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.358880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.358901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:31:46.943 [2024-07-15 19:37:55.358914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.358922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.358945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.358966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.358979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.358986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.360054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.360079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.360102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.360122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.360143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.360164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.360184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.360203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.360229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.943 [2024-07-15 19:37:55.360253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.360275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.360298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:46.943 [2024-07-15 19:37:55.360311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.943 [2024-07-15 19:37:55.360318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.944 [2024-07-15 19:37:55.360587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.360864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.360877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.360884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.363292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.363311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.363327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.944 [2024-07-15 19:37:55.363335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:46.944 [2024-07-15 19:37:55.363349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.944 [2024-07-15 19:37:55.363358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:31:46.945 [2024-07-15 19:37:55.363622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.945 [2024-07-15 19:37:55.363968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:46.945 [2024-07-15 19:37:55.363981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.945 [2024-07-15 19:37:55.363989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.946 [2024-07-15 19:37:55.364031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.946 [2024-07-15 19:37:55.364734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.946 [2024-07-15 19:37:55.364758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:46.946 [2024-07-15 19:37:55.364779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.946 [2024-07-15 19:37:55.364800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.946 [2024-07-15 19:37:55.364821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.946 [2024-07-15 19:37:55.364841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.946 [2024-07-15 19:37:55.364863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.946 [2024-07-15 19:37:55.364884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.946 [2024-07-15 19:37:55.364905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.364978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.364986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.365000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.946 [2024-07-15 19:37:55.365008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:46.946 [2024-07-15 19:37:55.365020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.365468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.365492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.365513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.365538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.365558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.365579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:31:46.947 [2024-07-15 19:37:55.365674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.365682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.365702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.365723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.365779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.365788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.367025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.367042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.367057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.367066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.367080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.367088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.367101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.947 [2024-07-15 19:37:55.367108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.367122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.367131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.367143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.367151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:46.947 [2024-07-15 19:37:55.367163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.947 [2024-07-15 19:37:55.367171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:46.948 [2024-07-15 19:37:55.367534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.948 [2024-07-15 19:37:55.367661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.367696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.367704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.369177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.369195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.369210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.369219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.369238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.369246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.369260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.369268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:46.948 [2024-07-15 19:37:55.369284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.948 [2024-07-15 19:37:55.369292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.369313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.369333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.369498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.369518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.369542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.369563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.369584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.369604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:31:46.949 [2024-07-15 19:37:55.369637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.369665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.369748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.369762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.369769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.370688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.370708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.370732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.370741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.370754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.370762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.370776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.370783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.370797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.370804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.370817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.949 [2024-07-15 19:37:55.370824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:46.949 [2024-07-15 19:37:55.370837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.949 [2024-07-15 19:37:55.370845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.370858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.950 [2024-07-15 19:37:55.370866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.370878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.950 [2024-07-15 19:37:55.370886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.370899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.950 [2024-07-15 19:37:55.370906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.370919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.950 [2024-07-15 19:37:55.370927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.370941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.950 [2024-07-15 19:37:55.370948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.370962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.950 [2024-07-15 19:37:55.370969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.370984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.950 [2024-07-15 19:37:55.370992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.950 [2024-07-15 19:37:55.371013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.950 [2024-07-15 19:37:55.371034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.950 [2024-07-15 19:37:55.371054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.950 [2024-07-15 19:37:55.371076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.950 [2024-07-15 19:37:55.371096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.950 [2024-07-15 19:37:55.371118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.950 [2024-07-15 19:37:55.371138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.950 [2024-07-15 19:37:55.371160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.950 [2024-07-15 19:37:55.371180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.950 [2024-07-15 19:37:55.371200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.950 [2024-07-15 19:37:55.371220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.950 [2024-07-15 19:37:55.371249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.950 [2024-07-15 19:37:55.371270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:46.950 [2024-07-15 19:37:55.371284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.950 [2024-07-15 19:37:55.371292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:46.950 Received shutdown signal, test time was about 26.897757 seconds 00:31:46.950 00:31:46.950 Latency(us) 00:31:46.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.950 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:46.950 Verification LBA range: start 0x0 length 0x4000 00:31:46.950 Nvme0n1 : 26.90 10398.85 40.62 0.00 0.00 12289.29 197.68 3019898.88 00:31:46.950 =================================================================================================================== 00:31:46.950 Total : 10398.85 40.62 0.00 0.00 12289.29 197.68 3019898.88 00:31:46.950 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:47.208 19:37:57 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:47.208 rmmod nvme_tcp 00:31:47.208 rmmod nvme_fabrics 00:31:47.208 rmmod nvme_keyring 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1795653 ']' 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1795653 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1795653 ']' 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1795653 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1795653 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1795653' 00:31:47.208 killing process with pid 1795653 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1795653 00:31:47.208 19:37:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1795653 00:31:47.466 19:37:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:47.467 19:37:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:47.467 19:37:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:47.467 19:37:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:47.467 19:37:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:47.467 19:37:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.467 19:37:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:47.467 19:37:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.369 19:38:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:49.369 00:31:49.369 real 0m37.775s 00:31:49.369 user 1m42.406s 00:31:49.369 sys 0m10.241s 00:31:49.369 19:38:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:49.370 19:38:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:49.370 ************************************ 00:31:49.370 END TEST nvmf_host_multipath_status 00:31:49.370 
************************************ 00:31:49.629 19:38:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:49.629 19:38:00 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:49.629 19:38:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:49.629 19:38:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:49.629 19:38:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:49.629 ************************************ 00:31:49.629 START TEST nvmf_discovery_remove_ifc 00:31:49.629 ************************************ 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:49.629 * Looking for test storage... 00:31:49.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh 
]] 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:49.629 19:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:54.911 
19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:54.911 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:54.911 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:54.911 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:54.912 19:38:05 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:54.912 Found net devices under 0000:86:00.0: cvl_0_0 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:54.912 Found net devices under 0000:86:00.1: cvl_0_1 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.912 19:38:05 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:54.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:31:54.912 00:31:54.912 --- 10.0.0.2 ping statistics --- 00:31:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.912 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:31:54.912 00:31:54.912 --- 10.0.0.1 ping statistics --- 00:31:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.912 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1803976 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1803976 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1803976 ']' 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:54.912 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.912 [2024-07-15 19:38:05.751411] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:31:54.912 [2024-07-15 19:38:05.751458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.172 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.172 [2024-07-15 19:38:05.781246] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:31:55.172 [2024-07-15 19:38:05.808369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.172 [2024-07-15 19:38:05.848619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.172 [2024-07-15 19:38:05.848655] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.172 [2024-07-15 19:38:05.848663] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.172 [2024-07-15 19:38:05.848669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.172 [2024-07-15 19:38:05.848674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.172 [2024-07-15 19:38:05.848691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.172 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:55.172 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:55.172 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:55.172 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:55.172 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.172 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.172 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:55.172 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.172 19:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.172 [2024-07-15 19:38:05.977379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.172 [2024-07-15 19:38:05.985522] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:55.172 null0 00:31:55.172 [2024-07-15 19:38:06.017526] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.430 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.430 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1804013 00:31:55.430 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:55.430 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1804013 /tmp/host.sock 00:31:55.430 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1804013 ']' 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:31:55.431 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.431 [2024-07-15 19:38:06.067791] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:31:55.431 [2024-07-15 19:38:06.067836] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1804013 ] 00:31:55.431 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.431 [2024-07-15 19:38:06.094461] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:55.431 [2024-07-15 19:38:06.121744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.431 [2024-07-15 19:38:06.163887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.431 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.689 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.689 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:55.689 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.689 19:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.626 [2024-07-15 19:38:07.344317] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:56.626 [2024-07-15 19:38:07.344335] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:56.626 [2024-07-15 19:38:07.344348] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:56.626 [2024-07-15 19:38:07.430612] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:56.885 [2024-07-15 19:38:07.487397] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:56.885 [2024-07-15 19:38:07.487439] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:56.885 [2024-07-15 19:38:07.487459] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:56.885 [2024-07-15 19:38:07.487473] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:56.885 [2024-07-15 19:38:07.487490] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.886 [2024-07-15 19:38:07.533794] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ea3440 was disconnected and freed. delete nvme_qpair. 
00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:56.886 19:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:58.265 19:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.265 19:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.265 19:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.265 19:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.265 19:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.265 19:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.265 19:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.265 19:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.265 19:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:58.265 19:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:59.202 19:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.202 19:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.202 19:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.202 19:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.202 19:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.202 19:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.202 19:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:31:59.202 19:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.202 19:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:59.202 19:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:00.140 19:38:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:00.140 19:38:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.140 19:38:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:00.140 19:38:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.140 19:38:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:00.140 19:38:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.140 19:38:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:00.140 19:38:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.140 19:38:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:00.140 19:38:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.078 19:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.078 19:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.078 19:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.078 19:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.078 19:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.078 19:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.078 19:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.078 19:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.078 19:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:01.078 19:38:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:02.458 19:38:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:02.458 19:38:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:02.458 19:38:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.458 19:38:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.458 19:38:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:02.458 19:38:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:02.458 19:38:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:02.458 19:38:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.458 [2024-07-15 19:38:12.929188] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:02.458 [2024-07-15 19:38:12.929232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.458 [2024-07-15 19:38:12.929243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.458 [2024-07-15 19:38:12.929253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.458 [2024-07-15 19:38:12.929260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.458 [2024-07-15 19:38:12.929267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.458 [2024-07-15 19:38:12.929274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.458 [2024-07-15 19:38:12.929281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.459 [2024-07-15 19:38:12.929293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.459 [2024-07-15 19:38:12.929300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.459 [2024-07-15 19:38:12.929306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.459 [2024-07-15 19:38:12.929313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e69d30 is same with the state(5) to be set 00:32:02.459 [2024-07-15 19:38:12.939210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e69d30 (9): Bad file descriptor 00:32:02.459 19:38:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:02.459 19:38:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:02.459 [2024-07-15 19:38:12.949248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:03.415 19:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:03.415 19:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.415 19:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:03.415 19:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.415 19:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:03.415 19:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.415 19:38:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:03.415 [2024-07-15 19:38:13.986256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:03.415 [2024-07-15 19:38:13.986304] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e69d30 with addr=10.0.0.2, port=4420 00:32:03.415 [2024-07-15 19:38:13.986321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e69d30 is same with the state(5) to be set 00:32:03.415 [2024-07-15 19:38:13.986354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e69d30 (9): Bad file descriptor 00:32:03.415 [2024-07-15 19:38:13.986785] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:03.415 [2024-07-15 19:38:13.986807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:03.415 [2024-07-15 19:38:13.986817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:03.415 [2024-07-15 19:38:13.986827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:03.415 [2024-07-15 19:38:13.986848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:03.415 [2024-07-15 19:38:13.986858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:03.415 19:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.415 19:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:03.415 19:38:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:04.430 [2024-07-15 19:38:14.989343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:04.430 [2024-07-15 19:38:14.989367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:04.430 [2024-07-15 19:38:14.989374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:04.430 [2024-07-15 19:38:14.989381] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:04.430 [2024-07-15 19:38:14.989394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:04.430 [2024-07-15 19:38:14.989417] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:04.430 [2024-07-15 19:38:14.989439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.430 [2024-07-15 19:38:14.989448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.430 [2024-07-15 19:38:14.989459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.430 [2024-07-15 19:38:14.989465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.430 [2024-07-15 19:38:14.989472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.430 [2024-07-15 19:38:14.989479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.430 [2024-07-15 19:38:14.989486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.430 [2024-07-15 19:38:14.989492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.430 [2024-07-15 19:38:14.989500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.430 [2024-07-15 19:38:14.989506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.430 [2024-07-15 19:38:14.989512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:04.430 [2024-07-15 19:38:14.989592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e69180 (9): Bad file descriptor 00:32:04.430 [2024-07-15 19:38:14.990602] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:04.430 [2024-07-15 19:38:14.990614] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:04.431 19:38:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.370 19:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.370 19:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.370 19:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.370 19:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.370 19:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.370 19:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.370 19:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.370 19:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.629 19:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:05.629 19:38:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.197 [2024-07-15 19:38:17.040365] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:06.197 [2024-07-15 19:38:17.040384] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:06.197 [2024-07-15 19:38:17.040399] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:06.457 [2024-07-15 19:38:17.127662] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:06.457 [2024-07-15 19:38:17.189969] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:06.457 [2024-07-15 19:38:17.190003] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:06.457 [2024-07-15 19:38:17.190021] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:06.457 [2024-07-15 19:38:17.190034] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:06.457 [2024-07-15 19:38:17.190042] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:06.457 [2024-07-15 19:38:17.197827] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e81a40 was disconnected and freed. delete nvme_qpair. 
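The wait loop traced above and below reduces to polling the host app's bdev list until the expected name appears (or disappears). A minimal sketch of that pattern, assuming rpc_cmd is the harness wrapper around scripts/rpc.py and /tmp/host.sock is the host app's RPC socket as in the trace:

  get_bdev_list() {
      # bdev names known to the host app, sorted and joined onto one line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # poll once per second until the bdev list matches the expected name (e.g. nvme1n1)
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }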
00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1804013 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1804013 ']' 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1804013 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:06.457 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1804013 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1804013' 00:32:06.717 killing process with pid 1804013 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1804013 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1804013 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:06.717 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:06.718 rmmod nvme_tcp 00:32:06.718 rmmod nvme_fabrics 00:32:06.718 rmmod nvme_keyring 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
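killprocess, as traced here, is a guarded kill-and-reap: confirm the pid still maps to the process that was started, announce it, signal it, then wait for it so the harness does not leave children behind. A simplified sketch of that pattern (not the verbatim harness function):

  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0 for an SPDK app
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                    # reap it
  }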
00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1803976 ']' 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1803976 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1803976 ']' 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1803976 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1803976 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1803976' 00:32:06.978 killing process with pid 1803976 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1803976 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1803976 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:06.978 19:38:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.517 19:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:09.517 00:32:09.517 real 0m19.576s 00:32:09.517 user 0m24.282s 00:32:09.517 sys 0m5.152s 00:32:09.517 19:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:09.517 19:38:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.517 ************************************ 00:32:09.517 END TEST nvmf_discovery_remove_ifc 00:32:09.517 ************************************ 00:32:09.517 19:38:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:09.517 19:38:19 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:09.517 19:38:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:09.517 19:38:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:09.517 19:38:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:09.517 ************************************ 00:32:09.517 START TEST nvmf_identify_kernel_target 00:32:09.517 ************************************ 
00:32:09.517 19:38:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:09.517 * Looking for test storage... 00:32:09.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:09.517 19:38:20 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:09.517 19:38:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:14.795 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:14.795 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:14.795 Found net devices under 0000:86:00.0: cvl_0_0 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:14.795 Found net devices under 0000:86:00.1: cvl_0_1 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:14.795 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:14.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:14.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:32:14.796 00:32:14.796 --- 10.0.0.2 ping statistics --- 00:32:14.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.796 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:14.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:14.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:32:14.796 00:32:14.796 --- 10.0.0.1 ping statistics --- 00:32:14.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.796 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:14.796 19:38:25 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:14.796 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:15.054 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:15.054 19:38:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:17.594 Waiting for block devices as requested 00:32:17.594 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:17.594 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:17.594 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:17.594 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:17.594 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:17.594 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:17.594 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:17.854 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:17.854 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:17.854 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:18.114 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:18.114 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:18.114 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:18.114 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:18.374 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:18.374 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:18.374 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:18.634 No valid GPT data, bailing 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:18.634 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:32:18.634 00:32:18.634 Discovery Log Number of Records 2, Generation counter 2 00:32:18.634 =====Discovery Log Entry 0====== 00:32:18.634 trtype: tcp 00:32:18.634 adrfam: ipv4 00:32:18.634 subtype: current discovery subsystem 00:32:18.634 treq: not specified, sq flow control disable supported 00:32:18.634 portid: 1 00:32:18.634 trsvcid: 4420 00:32:18.634 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:18.634 traddr: 10.0.0.1 00:32:18.634 eflags: none 00:32:18.634 sectype: none 00:32:18.634 =====Discovery Log Entry 1====== 00:32:18.634 trtype: tcp 00:32:18.634 adrfam: ipv4 00:32:18.634 subtype: nvme subsystem 00:32:18.634 treq: not specified, sq flow control disable supported 00:32:18.634 portid: 1 00:32:18.634 trsvcid: 4420 00:32:18.634 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:18.635 traddr: 10.0.0.1 00:32:18.635 eflags: none 00:32:18.635 sectype: none 00:32:18.635 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:18.635 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:18.635 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.635 ===================================================== 00:32:18.635 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:18.635 ===================================================== 00:32:18.635 Controller Capabilities/Features 00:32:18.635 ================================ 00:32:18.635 Vendor ID: 0000 00:32:18.635 Subsystem Vendor ID: 0000 00:32:18.635 Serial Number: 68a7fcfc60ac3470cd75 00:32:18.635 Model Number: Linux 00:32:18.635 Firmware Version: 6.7.0-68 00:32:18.635 Recommended Arb Burst: 0 00:32:18.635 IEEE OUI Identifier: 00 00 00 00:32:18.635 Multi-path I/O 00:32:18.635 May have multiple subsystem ports: No 00:32:18.635 May have multiple 
controllers: No 00:32:18.635 Associated with SR-IOV VF: No 00:32:18.635 Max Data Transfer Size: Unlimited 00:32:18.635 Max Number of Namespaces: 0 00:32:18.635 Max Number of I/O Queues: 1024 00:32:18.635 NVMe Specification Version (VS): 1.3 00:32:18.635 NVMe Specification Version (Identify): 1.3 00:32:18.635 Maximum Queue Entries: 1024 00:32:18.635 Contiguous Queues Required: No 00:32:18.635 Arbitration Mechanisms Supported 00:32:18.635 Weighted Round Robin: Not Supported 00:32:18.635 Vendor Specific: Not Supported 00:32:18.635 Reset Timeout: 7500 ms 00:32:18.635 Doorbell Stride: 4 bytes 00:32:18.635 NVM Subsystem Reset: Not Supported 00:32:18.635 Command Sets Supported 00:32:18.635 NVM Command Set: Supported 00:32:18.635 Boot Partition: Not Supported 00:32:18.635 Memory Page Size Minimum: 4096 bytes 00:32:18.635 Memory Page Size Maximum: 4096 bytes 00:32:18.635 Persistent Memory Region: Not Supported 00:32:18.635 Optional Asynchronous Events Supported 00:32:18.635 Namespace Attribute Notices: Not Supported 00:32:18.635 Firmware Activation Notices: Not Supported 00:32:18.635 ANA Change Notices: Not Supported 00:32:18.635 PLE Aggregate Log Change Notices: Not Supported 00:32:18.635 LBA Status Info Alert Notices: Not Supported 00:32:18.635 EGE Aggregate Log Change Notices: Not Supported 00:32:18.635 Normal NVM Subsystem Shutdown event: Not Supported 00:32:18.635 Zone Descriptor Change Notices: Not Supported 00:32:18.635 Discovery Log Change Notices: Supported 00:32:18.635 Controller Attributes 00:32:18.635 128-bit Host Identifier: Not Supported 00:32:18.635 Non-Operational Permissive Mode: Not Supported 00:32:18.635 NVM Sets: Not Supported 00:32:18.635 Read Recovery Levels: Not Supported 00:32:18.635 Endurance Groups: Not Supported 00:32:18.635 Predictable Latency Mode: Not Supported 00:32:18.635 Traffic Based Keep ALive: Not Supported 00:32:18.635 Namespace Granularity: Not Supported 00:32:18.635 SQ Associations: Not Supported 00:32:18.635 UUID List: Not Supported 00:32:18.635 Multi-Domain Subsystem: Not Supported 00:32:18.635 Fixed Capacity Management: Not Supported 00:32:18.635 Variable Capacity Management: Not Supported 00:32:18.635 Delete Endurance Group: Not Supported 00:32:18.635 Delete NVM Set: Not Supported 00:32:18.635 Extended LBA Formats Supported: Not Supported 00:32:18.635 Flexible Data Placement Supported: Not Supported 00:32:18.635 00:32:18.635 Controller Memory Buffer Support 00:32:18.635 ================================ 00:32:18.635 Supported: No 00:32:18.635 00:32:18.635 Persistent Memory Region Support 00:32:18.635 ================================ 00:32:18.635 Supported: No 00:32:18.635 00:32:18.635 Admin Command Set Attributes 00:32:18.635 ============================ 00:32:18.635 Security Send/Receive: Not Supported 00:32:18.635 Format NVM: Not Supported 00:32:18.635 Firmware Activate/Download: Not Supported 00:32:18.635 Namespace Management: Not Supported 00:32:18.635 Device Self-Test: Not Supported 00:32:18.635 Directives: Not Supported 00:32:18.635 NVMe-MI: Not Supported 00:32:18.635 Virtualization Management: Not Supported 00:32:18.635 Doorbell Buffer Config: Not Supported 00:32:18.635 Get LBA Status Capability: Not Supported 00:32:18.635 Command & Feature Lockdown Capability: Not Supported 00:32:18.635 Abort Command Limit: 1 00:32:18.635 Async Event Request Limit: 1 00:32:18.635 Number of Firmware Slots: N/A 00:32:18.635 Firmware Slot 1 Read-Only: N/A 00:32:18.635 Firmware Activation Without Reset: N/A 00:32:18.635 Multiple Update Detection Support: N/A 
00:32:18.635 Firmware Update Granularity: No Information Provided 00:32:18.635 Per-Namespace SMART Log: No 00:32:18.635 Asymmetric Namespace Access Log Page: Not Supported 00:32:18.635 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:18.635 Command Effects Log Page: Not Supported 00:32:18.635 Get Log Page Extended Data: Supported 00:32:18.635 Telemetry Log Pages: Not Supported 00:32:18.635 Persistent Event Log Pages: Not Supported 00:32:18.635 Supported Log Pages Log Page: May Support 00:32:18.635 Commands Supported & Effects Log Page: Not Supported 00:32:18.635 Feature Identifiers & Effects Log Page:May Support 00:32:18.635 NVMe-MI Commands & Effects Log Page: May Support 00:32:18.635 Data Area 4 for Telemetry Log: Not Supported 00:32:18.635 Error Log Page Entries Supported: 1 00:32:18.635 Keep Alive: Not Supported 00:32:18.635 00:32:18.635 NVM Command Set Attributes 00:32:18.635 ========================== 00:32:18.635 Submission Queue Entry Size 00:32:18.635 Max: 1 00:32:18.635 Min: 1 00:32:18.635 Completion Queue Entry Size 00:32:18.635 Max: 1 00:32:18.635 Min: 1 00:32:18.635 Number of Namespaces: 0 00:32:18.635 Compare Command: Not Supported 00:32:18.635 Write Uncorrectable Command: Not Supported 00:32:18.635 Dataset Management Command: Not Supported 00:32:18.635 Write Zeroes Command: Not Supported 00:32:18.635 Set Features Save Field: Not Supported 00:32:18.635 Reservations: Not Supported 00:32:18.635 Timestamp: Not Supported 00:32:18.635 Copy: Not Supported 00:32:18.635 Volatile Write Cache: Not Present 00:32:18.635 Atomic Write Unit (Normal): 1 00:32:18.635 Atomic Write Unit (PFail): 1 00:32:18.635 Atomic Compare & Write Unit: 1 00:32:18.635 Fused Compare & Write: Not Supported 00:32:18.635 Scatter-Gather List 00:32:18.635 SGL Command Set: Supported 00:32:18.635 SGL Keyed: Not Supported 00:32:18.635 SGL Bit Bucket Descriptor: Not Supported 00:32:18.635 SGL Metadata Pointer: Not Supported 00:32:18.635 Oversized SGL: Not Supported 00:32:18.635 SGL Metadata Address: Not Supported 00:32:18.635 SGL Offset: Supported 00:32:18.635 Transport SGL Data Block: Not Supported 00:32:18.635 Replay Protected Memory Block: Not Supported 00:32:18.635 00:32:18.635 Firmware Slot Information 00:32:18.635 ========================= 00:32:18.635 Active slot: 0 00:32:18.635 00:32:18.635 00:32:18.635 Error Log 00:32:18.635 ========= 00:32:18.635 00:32:18.635 Active Namespaces 00:32:18.635 ================= 00:32:18.635 Discovery Log Page 00:32:18.635 ================== 00:32:18.635 Generation Counter: 2 00:32:18.635 Number of Records: 2 00:32:18.635 Record Format: 0 00:32:18.635 00:32:18.635 Discovery Log Entry 0 00:32:18.635 ---------------------- 00:32:18.635 Transport Type: 3 (TCP) 00:32:18.635 Address Family: 1 (IPv4) 00:32:18.635 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:18.635 Entry Flags: 00:32:18.635 Duplicate Returned Information: 0 00:32:18.635 Explicit Persistent Connection Support for Discovery: 0 00:32:18.635 Transport Requirements: 00:32:18.635 Secure Channel: Not Specified 00:32:18.635 Port ID: 1 (0x0001) 00:32:18.635 Controller ID: 65535 (0xffff) 00:32:18.635 Admin Max SQ Size: 32 00:32:18.635 Transport Service Identifier: 4420 00:32:18.635 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:18.635 Transport Address: 10.0.0.1 00:32:18.635 Discovery Log Entry 1 00:32:18.635 ---------------------- 00:32:18.635 Transport Type: 3 (TCP) 00:32:18.635 Address Family: 1 (IPv4) 00:32:18.635 Subsystem Type: 2 (NVM Subsystem) 00:32:18.635 Entry Flags: 
00:32:18.635 Duplicate Returned Information: 0 00:32:18.635 Explicit Persistent Connection Support for Discovery: 0 00:32:18.635 Transport Requirements: 00:32:18.635 Secure Channel: Not Specified 00:32:18.635 Port ID: 1 (0x0001) 00:32:18.635 Controller ID: 65535 (0xffff) 00:32:18.635 Admin Max SQ Size: 32 00:32:18.635 Transport Service Identifier: 4420 00:32:18.635 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:18.635 Transport Address: 10.0.0.1 00:32:18.635 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:18.896 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.896 get_feature(0x01) failed 00:32:18.896 get_feature(0x02) failed 00:32:18.896 get_feature(0x04) failed 00:32:18.896 ===================================================== 00:32:18.896 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:18.896 ===================================================== 00:32:18.896 Controller Capabilities/Features 00:32:18.896 ================================ 00:32:18.896 Vendor ID: 0000 00:32:18.896 Subsystem Vendor ID: 0000 00:32:18.896 Serial Number: 1e72f0f396ec1d62fca8 00:32:18.896 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:18.896 Firmware Version: 6.7.0-68 00:32:18.896 Recommended Arb Burst: 6 00:32:18.896 IEEE OUI Identifier: 00 00 00 00:32:18.896 Multi-path I/O 00:32:18.896 May have multiple subsystem ports: Yes 00:32:18.896 May have multiple controllers: Yes 00:32:18.896 Associated with SR-IOV VF: No 00:32:18.896 Max Data Transfer Size: Unlimited 00:32:18.896 Max Number of Namespaces: 1024 00:32:18.896 Max Number of I/O Queues: 128 00:32:18.896 NVMe Specification Version (VS): 1.3 00:32:18.896 NVMe Specification Version (Identify): 1.3 00:32:18.896 Maximum Queue Entries: 1024 00:32:18.896 Contiguous Queues Required: No 00:32:18.896 Arbitration Mechanisms Supported 00:32:18.896 Weighted Round Robin: Not Supported 00:32:18.896 Vendor Specific: Not Supported 00:32:18.896 Reset Timeout: 7500 ms 00:32:18.896 Doorbell Stride: 4 bytes 00:32:18.896 NVM Subsystem Reset: Not Supported 00:32:18.896 Command Sets Supported 00:32:18.896 NVM Command Set: Supported 00:32:18.896 Boot Partition: Not Supported 00:32:18.896 Memory Page Size Minimum: 4096 bytes 00:32:18.896 Memory Page Size Maximum: 4096 bytes 00:32:18.896 Persistent Memory Region: Not Supported 00:32:18.896 Optional Asynchronous Events Supported 00:32:18.896 Namespace Attribute Notices: Supported 00:32:18.896 Firmware Activation Notices: Not Supported 00:32:18.896 ANA Change Notices: Supported 00:32:18.896 PLE Aggregate Log Change Notices: Not Supported 00:32:18.896 LBA Status Info Alert Notices: Not Supported 00:32:18.896 EGE Aggregate Log Change Notices: Not Supported 00:32:18.896 Normal NVM Subsystem Shutdown event: Not Supported 00:32:18.896 Zone Descriptor Change Notices: Not Supported 00:32:18.896 Discovery Log Change Notices: Not Supported 00:32:18.896 Controller Attributes 00:32:18.897 128-bit Host Identifier: Supported 00:32:18.897 Non-Operational Permissive Mode: Not Supported 00:32:18.897 NVM Sets: Not Supported 00:32:18.897 Read Recovery Levels: Not Supported 00:32:18.897 Endurance Groups: Not Supported 00:32:18.897 Predictable Latency Mode: Not Supported 00:32:18.897 Traffic Based Keep ALive: Supported 00:32:18.897 Namespace Granularity: Not Supported 
00:32:18.897 SQ Associations: Not Supported 00:32:18.897 UUID List: Not Supported 00:32:18.897 Multi-Domain Subsystem: Not Supported 00:32:18.897 Fixed Capacity Management: Not Supported 00:32:18.897 Variable Capacity Management: Not Supported 00:32:18.897 Delete Endurance Group: Not Supported 00:32:18.897 Delete NVM Set: Not Supported 00:32:18.897 Extended LBA Formats Supported: Not Supported 00:32:18.897 Flexible Data Placement Supported: Not Supported 00:32:18.897 00:32:18.897 Controller Memory Buffer Support 00:32:18.897 ================================ 00:32:18.897 Supported: No 00:32:18.897 00:32:18.897 Persistent Memory Region Support 00:32:18.897 ================================ 00:32:18.897 Supported: No 00:32:18.897 00:32:18.897 Admin Command Set Attributes 00:32:18.897 ============================ 00:32:18.897 Security Send/Receive: Not Supported 00:32:18.897 Format NVM: Not Supported 00:32:18.897 Firmware Activate/Download: Not Supported 00:32:18.897 Namespace Management: Not Supported 00:32:18.897 Device Self-Test: Not Supported 00:32:18.897 Directives: Not Supported 00:32:18.897 NVMe-MI: Not Supported 00:32:18.897 Virtualization Management: Not Supported 00:32:18.897 Doorbell Buffer Config: Not Supported 00:32:18.897 Get LBA Status Capability: Not Supported 00:32:18.897 Command & Feature Lockdown Capability: Not Supported 00:32:18.897 Abort Command Limit: 4 00:32:18.897 Async Event Request Limit: 4 00:32:18.897 Number of Firmware Slots: N/A 00:32:18.897 Firmware Slot 1 Read-Only: N/A 00:32:18.897 Firmware Activation Without Reset: N/A 00:32:18.897 Multiple Update Detection Support: N/A 00:32:18.897 Firmware Update Granularity: No Information Provided 00:32:18.897 Per-Namespace SMART Log: Yes 00:32:18.897 Asymmetric Namespace Access Log Page: Supported 00:32:18.897 ANA Transition Time : 10 sec 00:32:18.897 00:32:18.897 Asymmetric Namespace Access Capabilities 00:32:18.897 ANA Optimized State : Supported 00:32:18.897 ANA Non-Optimized State : Supported 00:32:18.897 ANA Inaccessible State : Supported 00:32:18.897 ANA Persistent Loss State : Supported 00:32:18.897 ANA Change State : Supported 00:32:18.897 ANAGRPID is not changed : No 00:32:18.897 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:18.897 00:32:18.897 ANA Group Identifier Maximum : 128 00:32:18.897 Number of ANA Group Identifiers : 128 00:32:18.897 Max Number of Allowed Namespaces : 1024 00:32:18.897 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:18.897 Command Effects Log Page: Supported 00:32:18.897 Get Log Page Extended Data: Supported 00:32:18.897 Telemetry Log Pages: Not Supported 00:32:18.897 Persistent Event Log Pages: Not Supported 00:32:18.897 Supported Log Pages Log Page: May Support 00:32:18.897 Commands Supported & Effects Log Page: Not Supported 00:32:18.897 Feature Identifiers & Effects Log Page:May Support 00:32:18.897 NVMe-MI Commands & Effects Log Page: May Support 00:32:18.897 Data Area 4 for Telemetry Log: Not Supported 00:32:18.897 Error Log Page Entries Supported: 128 00:32:18.897 Keep Alive: Supported 00:32:18.897 Keep Alive Granularity: 1000 ms 00:32:18.897 00:32:18.897 NVM Command Set Attributes 00:32:18.897 ========================== 00:32:18.897 Submission Queue Entry Size 00:32:18.897 Max: 64 00:32:18.897 Min: 64 00:32:18.897 Completion Queue Entry Size 00:32:18.897 Max: 16 00:32:18.897 Min: 16 00:32:18.897 Number of Namespaces: 1024 00:32:18.897 Compare Command: Not Supported 00:32:18.897 Write Uncorrectable Command: Not Supported 00:32:18.897 Dataset Management Command: Supported 
00:32:18.897 Write Zeroes Command: Supported 00:32:18.897 Set Features Save Field: Not Supported 00:32:18.897 Reservations: Not Supported 00:32:18.897 Timestamp: Not Supported 00:32:18.897 Copy: Not Supported 00:32:18.897 Volatile Write Cache: Present 00:32:18.897 Atomic Write Unit (Normal): 1 00:32:18.897 Atomic Write Unit (PFail): 1 00:32:18.897 Atomic Compare & Write Unit: 1 00:32:18.897 Fused Compare & Write: Not Supported 00:32:18.897 Scatter-Gather List 00:32:18.897 SGL Command Set: Supported 00:32:18.897 SGL Keyed: Not Supported 00:32:18.897 SGL Bit Bucket Descriptor: Not Supported 00:32:18.897 SGL Metadata Pointer: Not Supported 00:32:18.897 Oversized SGL: Not Supported 00:32:18.897 SGL Metadata Address: Not Supported 00:32:18.897 SGL Offset: Supported 00:32:18.897 Transport SGL Data Block: Not Supported 00:32:18.897 Replay Protected Memory Block: Not Supported 00:32:18.897 00:32:18.897 Firmware Slot Information 00:32:18.897 ========================= 00:32:18.897 Active slot: 0 00:32:18.897 00:32:18.897 Asymmetric Namespace Access 00:32:18.897 =========================== 00:32:18.897 Change Count : 0 00:32:18.897 Number of ANA Group Descriptors : 1 00:32:18.897 ANA Group Descriptor : 0 00:32:18.897 ANA Group ID : 1 00:32:18.897 Number of NSID Values : 1 00:32:18.897 Change Count : 0 00:32:18.897 ANA State : 1 00:32:18.897 Namespace Identifier : 1 00:32:18.897 00:32:18.897 Commands Supported and Effects 00:32:18.897 ============================== 00:32:18.897 Admin Commands 00:32:18.897 -------------- 00:32:18.897 Get Log Page (02h): Supported 00:32:18.897 Identify (06h): Supported 00:32:18.897 Abort (08h): Supported 00:32:18.897 Set Features (09h): Supported 00:32:18.897 Get Features (0Ah): Supported 00:32:18.897 Asynchronous Event Request (0Ch): Supported 00:32:18.897 Keep Alive (18h): Supported 00:32:18.897 I/O Commands 00:32:18.897 ------------ 00:32:18.897 Flush (00h): Supported 00:32:18.897 Write (01h): Supported LBA-Change 00:32:18.897 Read (02h): Supported 00:32:18.897 Write Zeroes (08h): Supported LBA-Change 00:32:18.897 Dataset Management (09h): Supported 00:32:18.897 00:32:18.897 Error Log 00:32:18.897 ========= 00:32:18.897 Entry: 0 00:32:18.897 Error Count: 0x3 00:32:18.897 Submission Queue Id: 0x0 00:32:18.897 Command Id: 0x5 00:32:18.897 Phase Bit: 0 00:32:18.897 Status Code: 0x2 00:32:18.897 Status Code Type: 0x0 00:32:18.897 Do Not Retry: 1 00:32:18.897 Error Location: 0x28 00:32:18.897 LBA: 0x0 00:32:18.897 Namespace: 0x0 00:32:18.897 Vendor Log Page: 0x0 00:32:18.897 ----------- 00:32:18.897 Entry: 1 00:32:18.897 Error Count: 0x2 00:32:18.897 Submission Queue Id: 0x0 00:32:18.897 Command Id: 0x5 00:32:18.897 Phase Bit: 0 00:32:18.897 Status Code: 0x2 00:32:18.897 Status Code Type: 0x0 00:32:18.897 Do Not Retry: 1 00:32:18.897 Error Location: 0x28 00:32:18.897 LBA: 0x0 00:32:18.897 Namespace: 0x0 00:32:18.897 Vendor Log Page: 0x0 00:32:18.897 ----------- 00:32:18.897 Entry: 2 00:32:18.897 Error Count: 0x1 00:32:18.897 Submission Queue Id: 0x0 00:32:18.897 Command Id: 0x4 00:32:18.897 Phase Bit: 0 00:32:18.897 Status Code: 0x2 00:32:18.897 Status Code Type: 0x0 00:32:18.897 Do Not Retry: 1 00:32:18.897 Error Location: 0x28 00:32:18.897 LBA: 0x0 00:32:18.897 Namespace: 0x0 00:32:18.897 Vendor Log Page: 0x0 00:32:18.897 00:32:18.897 Number of Queues 00:32:18.897 ================ 00:32:18.897 Number of I/O Submission Queues: 128 00:32:18.897 Number of I/O Completion Queues: 128 00:32:18.897 00:32:18.897 ZNS Specific Controller Data 00:32:18.897 
============================ 00:32:18.897 Zone Append Size Limit: 0 00:32:18.897 00:32:18.897 00:32:18.897 Active Namespaces 00:32:18.897 ================= 00:32:18.897 get_feature(0x05) failed 00:32:18.897 Namespace ID:1 00:32:18.897 Command Set Identifier: NVM (00h) 00:32:18.897 Deallocate: Supported 00:32:18.897 Deallocated/Unwritten Error: Not Supported 00:32:18.897 Deallocated Read Value: Unknown 00:32:18.897 Deallocate in Write Zeroes: Not Supported 00:32:18.897 Deallocated Guard Field: 0xFFFF 00:32:18.897 Flush: Supported 00:32:18.897 Reservation: Not Supported 00:32:18.897 Namespace Sharing Capabilities: Multiple Controllers 00:32:18.897 Size (in LBAs): 1953525168 (931GiB) 00:32:18.897 Capacity (in LBAs): 1953525168 (931GiB) 00:32:18.897 Utilization (in LBAs): 1953525168 (931GiB) 00:32:18.897 UUID: 8fdb590e-79a7-4902-be22-5f081191b5a1 00:32:18.897 Thin Provisioning: Not Supported 00:32:18.897 Per-NS Atomic Units: Yes 00:32:18.897 Atomic Boundary Size (Normal): 0 00:32:18.897 Atomic Boundary Size (PFail): 0 00:32:18.897 Atomic Boundary Offset: 0 00:32:18.898 NGUID/EUI64 Never Reused: No 00:32:18.898 ANA group ID: 1 00:32:18.898 Namespace Write Protected: No 00:32:18.898 Number of LBA Formats: 1 00:32:18.898 Current LBA Format: LBA Format #00 00:32:18.898 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:18.898 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.898 rmmod nvme_tcp 00:32:18.898 rmmod nvme_fabrics 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.898 19:38:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.806 19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:20.806 
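The nvmftestfini / nvmf_tcp_fini trace above amounts to unloading the host-side NVMe/TCP modules and tearing down the per-test network namespace. A minimal standalone sketch of the same teardown, assuming the interface name cvl_0_1 and namespace name cvl_0_0_ns_spdk that this run used (the _remove_spdk_ns helper is not expanded in the trace, so the ip netns delete line here is an approximation of it):

  # Unload host-side fabrics modules; nvme-tcp must be removed before nvme-fabrics
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Drop the target-side namespace and any leftover test addresses on the initiator NIC
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1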
19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:20.806 19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:20.806 19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:21.065 19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:21.065 19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:21.065 19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:21.065 19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:21.065 19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:21.065 19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:21.065 19:38:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:23.602 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:23.602 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:24.539 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:32:24.539 00:32:24.539 real 0m15.379s 00:32:24.539 user 0m3.718s 00:32:24.539 sys 0m7.996s 00:32:24.539 19:38:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:24.539 19:38:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:24.539 ************************************ 00:32:24.539 END TEST nvmf_identify_kernel_target 00:32:24.539 ************************************ 00:32:24.539 19:38:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:24.539 19:38:35 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:24.539 19:38:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:24.539 19:38:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:24.539 19:38:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.539 ************************************ 00:32:24.539 START TEST nvmf_auth_host 00:32:24.539 ************************************ 00:32:24.539 19:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:24.798 * Looking for test storage... 00:32:24.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.798 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:24.799 19:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.139 
19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:30.139 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:30.139 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:30.139 Found net devices under 0000:86:00.0: 
cvl_0_0 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:30.139 Found net devices under 0000:86:00.1: cvl_0_1 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:30.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:32:30.139 00:32:30.139 --- 10.0.0.2 ping statistics --- 00:32:30.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.139 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:30.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:32:30.139 00:32:30.139 --- 10.0.0.1 ping statistics --- 00:32:30.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.139 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:30.139 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1815509 00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1815509 00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1815509 ']' 00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
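Condensed from the nvmf_tcp_init and nvmfappstart trace above: the second NIC port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), the first port is moved into a private namespace as the target side (cvl_0_0, 10.0.0.2), and the SPDK target application is launched inside that namespace. A consolidated sketch with the values this run picked (interface names and the SPDK binary path will differ on other machines):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &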
00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:30.140 19:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ca7381f601aaf77c207b420a0937a35 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.axR 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ca7381f601aaf77c207b420a0937a35 0 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ca7381f601aaf77c207b420a0937a35 0 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ca7381f601aaf77c207b420a0937a35 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.399 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.axR 00:32:30.658 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.axR 00:32:30.658 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.axR 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:30.659 
19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=afce626bb68011c82e5ebfb86c9b0df1de74fa1ab7de26aa6d56cd9cc838b044 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Xoa 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key afce626bb68011c82e5ebfb86c9b0df1de74fa1ab7de26aa6d56cd9cc838b044 3 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 afce626bb68011c82e5ebfb86c9b0df1de74fa1ab7de26aa6d56cd9cc838b044 3 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=afce626bb68011c82e5ebfb86c9b0df1de74fa1ab7de26aa6d56cd9cc838b044 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Xoa 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Xoa 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Xoa 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d46b957bf6edc2b295f752fbdf19864c070e6cb82a30cda5 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Qwm 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d46b957bf6edc2b295f752fbdf19864c070e6cb82a30cda5 0 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d46b957bf6edc2b295f752fbdf19864c070e6cb82a30cda5 0 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d46b957bf6edc2b295f752fbdf19864c070e6cb82a30cda5 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Qwm 00:32:30.659 19:38:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Qwm 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Qwm 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f8151d96b0d31c1f86be851638174876ba3b7972d9ac720b 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jsP 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f8151d96b0d31c1f86be851638174876ba3b7972d9ac720b 2 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f8151d96b0d31c1f86be851638174876ba3b7972d9ac720b 2 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f8151d96b0d31c1f86be851638174876ba3b7972d9ac720b 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jsP 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jsP 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jsP 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bfe9513d875f9f10601c90c607c17be9 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Bof 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bfe9513d875f9f10601c90c607c17be9 1 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bfe9513d875f9f10601c90c607c17be9 1 
00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bfe9513d875f9f10601c90c607c17be9 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Bof 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Bof 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Bof 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=005a981b0ac3c0578ccbc9d545b2ce60 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Y1Y 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 005a981b0ac3c0578ccbc9d545b2ce60 1 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 005a981b0ac3c0578ccbc9d545b2ce60 1 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=005a981b0ac3c0578ccbc9d545b2ce60 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:30.659 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Y1Y 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Y1Y 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Y1Y 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=1f7cd8766fa3dd33433db87a5c08861bde95c34c62e91c2c 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DQW 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1f7cd8766fa3dd33433db87a5c08861bde95c34c62e91c2c 2 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1f7cd8766fa3dd33433db87a5c08861bde95c34c62e91c2c 2 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1f7cd8766fa3dd33433db87a5c08861bde95c34c62e91c2c 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DQW 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DQW 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.DQW 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=41f30417ae59066f10ef9ec3e2e5c981 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sZ8 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 41f30417ae59066f10ef9ec3e2e5c981 0 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 41f30417ae59066f10ef9ec3e2e5c981 0 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=41f30417ae59066f10ef9ec3e2e5c981 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sZ8 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sZ8 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.sZ8 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7b9549c270324737120d0321e2d063d77a1bf521d8dfa3129a5507f323919bcf 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Jp7 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7b9549c270324737120d0321e2d063d77a1bf521d8dfa3129a5507f323919bcf 3 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7b9549c270324737120d0321e2d063d77a1bf521d8dfa3129a5507f323919bcf 3 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7b9549c270324737120d0321e2d063d77a1bf521d8dfa3129a5507f323919bcf 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Jp7 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Jp7 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Jp7 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:30.919 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1815509 00:32:30.920 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1815509 ']' 00:32:30.920 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.920 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:30.920 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
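Each gen_dhchap_key call in the trace above draws random bytes with xxd and wraps them into an NVMe DHCHAP secret, which is where the DHHC-1:00:... and DHHC-1:02:... strings used later come from. The formatting itself is done by an inline python helper that xtrace does not expand; the sketch below assumes the standard DHHC-1 layout (a two-digit digest indicator, then base64 of the key bytes followed by their CRC-32 in little-endian order), so check it against nvmf/common.sh before relying on it:

  # draw 32 random bytes (64 hex chars) and format them as a DHCHAP secret
  key_hex=$(xxd -p -c0 -l 32 /dev/urandom)
  keyfile=$(mktemp -t spdk.key-sha512.XXX)
  # second python argument is the digest indicator: 0=null 1=sha256 2=sha384 3=sha512
  python3 -c 'import base64,struct,sys,zlib; k=bytes.fromhex(sys.argv[1]); b=k+struct.pack("<I",zlib.crc32(k)); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(b).decode()))' "$key_hex" 3 > "$keyfile"
  chmod 0600 "$keyfile"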
00:32:30.920 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:30.920 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.axR 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Xoa ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xoa 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Qwm 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jsP ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jsP 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Bof 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Y1Y ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Y1Y 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.DQW 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.sZ8 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.sZ8 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Jp7 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:31.179 19:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:31.180 19:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:31.180 19:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:33.727 Waiting for block devices as requested 00:32:33.727 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:33.987 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:33.987 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:33.987 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:34.247 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:34.247 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:34.247 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:34.247 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:34.506 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:34.506 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:34.506 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:34.765 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:34.765 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:34.765 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:34.765 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:35.023 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:35.023 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:35.592 No valid GPT data, bailing 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:32:35.592 00:32:35.592 Discovery Log Number of Records 2, Generation counter 2 00:32:35.592 =====Discovery Log Entry 0====== 00:32:35.592 trtype: tcp 00:32:35.592 adrfam: ipv4 00:32:35.592 subtype: current discovery subsystem 00:32:35.592 treq: not specified, sq flow control disable supported 00:32:35.592 portid: 1 00:32:35.592 trsvcid: 4420 00:32:35.592 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:35.592 traddr: 10.0.0.1 00:32:35.592 eflags: none 00:32:35.592 sectype: none 00:32:35.592 =====Discovery Log Entry 1====== 00:32:35.592 trtype: tcp 00:32:35.592 adrfam: ipv4 00:32:35.592 subtype: nvme subsystem 00:32:35.592 treq: not specified, sq flow control disable supported 00:32:35.592 portid: 1 00:32:35.592 trsvcid: 4420 00:32:35.592 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:35.592 traddr: 10.0.0.1 00:32:35.592 eflags: none 00:32:35.592 sectype: none 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 
]] 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.592 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.852 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.852 nvme0n1 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.853 
19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.853 
19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.853 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.113 nvme0n1 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.113 19:38:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.113 19:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.373 nvme0n1 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.373 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.633 nvme0n1 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:36.633 19:38:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.633 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.893 nvme0n1 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.893 nvme0n1 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.893 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.153 nvme0n1 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.153 19:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.412 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.413 nvme0n1 00:32:37.413 
19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.413 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.672 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.672 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.672 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.672 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.672 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.672 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.673 nvme0n1 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.673 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.932 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.933 nvme0n1 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.933 
19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.933 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.192 19:38:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.192 nvme0n1 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.192 19:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.192 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.192 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.192 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:38.193 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:38.193 19:38:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.452 nvme0n1 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.452 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.712 19:38:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.712 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.972 nvme0n1 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.972 19:38:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.972 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 nvme0n1 00:32:39.232 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.232 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.232 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.232 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.232 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.232 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.232 19:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.232 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.232 19:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.232 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.491 nvme0n1 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.491 19:38:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.491 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:39.492 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.751 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.011 nvme0n1 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:40.011 19:38:50 
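Each nvmet_auth_set_key call in the trace (host/auth.sh@42-@51) resolves the keyid to a DHHC-1 secret and echoes the digest, DH group and key(s) into the kernel nvmet target. The destinations of those echoes are not visible in this excerpt; the sketch below assumes the usual nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and an allowed-host path created earlier in the test, so treat the paths as illustrative rather than confirmed by the log:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed configfs location of the allowed-host entry for this host NQN.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"      # auth.sh@48 in the trace
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # auth.sh@49
        echo "${key}"          > "${host}/dhchap_key"       # auth.sh@50
        # auth.sh@51: a controller key is only written when bidirectional auth is exercised
        [[ -n ${ckey} ]] && echo "${ckey}" > "${host}/dhchap_ctrl_key"
    }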
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.011 19:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.271 nvme0n1 00:32:40.271 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.271 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.271 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.271 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.271 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.271 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.272 
19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.272 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.531 19:38:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.531 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.790 nvme0n1 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.790 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.791 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.791 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.791 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.791 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.791 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.791 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.791 19:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.791 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:40.791 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.791 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.359 nvme0n1 00:32:41.359 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.359 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.359 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.359 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.359 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.359 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.359 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.359 19:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.359 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.359 19:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.359 
19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:41.359 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.360 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.619 nvme0n1 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.619 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.878 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.138 nvme0n1 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
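On the initiator side, connect_authenticate (host/auth.sh@55-@65 in the trace) boils down to three SPDK RPCs plus a name check. A condensed sketch using exactly the calls that appear in the log; rpc_cmd is the autotest wrapper around scripts/rpc.py, and the ckeys array is assumed from earlier in auth.sh:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # auth.sh@60: restrict the host to a single digest and DH group for this pass
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # auth.sh@61: attach to the target at 10.0.0.1:4420 with the matching key name;
        # the controller key is passed only when ckeyN exists (same ${ckeys[keyid]:+...} trick as @58)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

        # auth.sh@64/@65: the controller must show up as nvme0, then it is detached again
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }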
ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.138 19:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.707 nvme0n1 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.707 19:38:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:42.707 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.001 19:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.568 nvme0n1 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.568 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.137 nvme0n1 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.137 
19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
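The get_main_ns_ip helper being traced here decides which address the initiator should dial for the transport under test: RDMA runs resolve NVMF_FIRST_TARGET_IP, TCP runs resolve NVMF_INITIATOR_IP, and the chosen value (10.0.0.1 throughout this run) is echoed back to connect_authenticate. A minimal bash sketch of that selection logic, assuming TEST_TRANSPORT and the two address variables are exported by the surrounding test environment (the real helper lives in nvmf/common.sh and may differ in detail):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Map the transport to the name of the variable holding the address,
        # then dereference it and fail if nothing usable is set.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }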
00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.137 19:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.705 nvme0n1 00:32:44.705 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.705 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.706 
19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.706 19:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.275 nvme0n1 00:32:45.275 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.275 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.275 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.275 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.275 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.275 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.535 nvme0n1 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:45.535 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
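The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment that recurs in these connect_authenticate traces is a bash array built with the ${var:+word} expansion: when a controller key exists for the keyid, the array carries the extra --dhchap-ctrlr-key arguments for bidirectional authentication, and when it is empty (as for keyid 4 above) the array expands to nothing and the attach is one-way only. A small stand-alone illustration of the idiom, with hypothetical key values:

    #!/usr/bin/env bash
    # ${var:+word} expands to word only when var is set and non-empty.
    ckeys=("DHHC-1:00:example==" "")   # hypothetical: keyid 0 has a ctrl key, keyid 1 does not
    for keyid in 0 1; do
        extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid: ${#extra[@]} extra args -> ${extra[*]}"
    done
    # prints: keyid=0: 2 extra args -> --dhchap-ctrlr-key ckey0
    #         keyid=1: 0 extra args ->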
00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.536 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.795 nvme0n1 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.795 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.796 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.796 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.796 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.796 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.796 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:45.796 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.796 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.054 nvme0n1 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.054 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.314 nvme0n1 00:32:46.314 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.314 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.314 19:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.314 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.314 19:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.314 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.574 nvme0n1 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
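Each nvmet_auth_set_key <digest> <dhgroup> <keyid> call traced above provisions the kernel nvmet target before the host attempts to connect: it picks the secret (and optional controller secret) for that keyid and writes the hash, DH group and DH-HMAC-CHAP keys into the configfs entry for the test host. A minimal sketch of that target-side step, assuming the usual nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and a hypothetical host-entry path; the actual helper in host/auth.sh may arrange this differently:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # hypothetical: the host entry is created once by the test setup
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$hostdir/dhchap_hash"
        echo "$dhgroup"      > "$hostdir/dhchap_dhgroup"
        echo "$key"          > "$hostdir/dhchap_key"
        # a controller key is only set when bidirectional auth is being tested
        [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"
    }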
00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.574 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.833 nvme0n1 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:46.833 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
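On the initiator side, connect_authenticate then drives the same parameters through SPDK's JSON-RPC interface: it restricts the allowed digests and DH groups with bdev_nvme_set_options, attaches the controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when one is configured), checks that the controller actually shows up, and detaches it before the next digest/dhgroup/keyid combination. A condensed sketch of one iteration using scripts/rpc.py directly; the socket path is an assumption, key1/ckey1 are the key names the test registered earlier, and the in-tree rpc_cmd wrapper performs the equivalent calls:

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # verify authentication succeeded and the controller appeared, then tear down
    [[ $($RPC bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    $RPC bdev_nvme_detach_controller nvme0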
00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.834 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.093 nvme0n1 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.093 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.353 nvme0n1 00:32:47.353 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.353 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.353 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.353 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.353 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.353 19:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.353 19:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.353 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.613 nvme0n1 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.613 nvme0n1 00:32:47.613 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.873 19:38:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.873 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.133 nvme0n1 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.133 19:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.392 nvme0n1 00:32:48.392 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.392 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.392 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.392 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.392 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.392 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.392 19:38:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.392 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.392 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.393 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.652 nvme0n1 00:32:48.652 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.652 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.652 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.652 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.652 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.652 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.652 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.652 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.652 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.652 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:48.911 19:38:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.911 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.170 nvme0n1 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.170 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.171 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.171 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.171 19:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.171 19:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.171 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:49.171 19:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.430 nvme0n1 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.430 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.688 nvme0n1 00:32:49.688 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.688 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.688 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.688 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.688 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.947 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.207 nvme0n1 00:32:50.208 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.208 19:39:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.208 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.208 19:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.208 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.208 19:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.208 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.466 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.466 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.725 nvme0n1 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.725 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.726 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.292 nvme0n1 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
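For context, the nvmet_auth_set_key steps visible above (the "echo 'hmac(sha384)'", "echo ffdhe6144", "echo DHHC-1:..." lines) provision the kernel target side of each iteration: they write the hash, DH group, and per-host secrets for the host NQN used in this run. A minimal sketch of that step, assuming the standard Linux nvmet configfs layout under /sys/kernel/config/nvmet/hosts/<hostnqn>/ (the redirect targets are not shown in the log, so the exact paths here are an assumption):

    # hypothetical restatement of nvmet_auth_set_key for one iteration
    HOSTNQN=nqn.2024-02.io.spdk:host0                     # host NQN used throughout this log
    HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN       # assumed configfs path for per-host auth attrs
    echo 'hmac(sha384)'  > "$HOSTDIR/dhchap_hash"         # digest selected for this pass
    echo ffdhe6144       > "$HOSTDIR/dhchap_dhgroup"      # DH group under test
    echo "DHHC-1:..."    > "$HOSTDIR/dhchap_key"          # host secret for the current keyid (placeholder)
    echo "DHHC-1:..."    > "$HOSTDIR/dhchap_ctrl_key"     # controller secret, only when a ckey exists for this keyid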
00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.292 19:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.550 nvme0n1 00:32:51.550 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.550 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.550 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.550 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.550 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.550 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.550 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.550 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.550 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.550 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
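On the initiator side, every iteration in this log is the same RPC sequence: constrain the DH-CHAP digest/dhgroup, attach with the selected key pair, confirm the controller name, and detach before the next keyid. Restated as a standalone sketch using the flags shown in the log (assuming rpc_cmd wraps SPDK's scripts/rpc.py and that key0/ckey0 were registered in the keyring earlier in the run):

    # 1. restrict DH-CHAP negotiation to the digest/dhgroup under test
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    # 2. attach the controller, authenticating with the selected key pair
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 3. verify the controller came up, then tear it down before the next iteration
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected output: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0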
00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.808 19:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.809 19:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.809 19:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:51.809 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.809 19:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.375 nvme0n1 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.375 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.940 nvme0n1 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:52.940 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.941 19:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.509 nvme0n1 00:32:53.509 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.509 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.509 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.509 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.509 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.509 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.768 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.363 nvme0n1 00:32:54.363 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.363 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:54.363 19:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.363 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.363 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.363 19:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.363 19:39:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.363 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.932 nvme0n1 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.932 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.191 nvme0n1 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.191 19:39:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.191 19:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.451 nvme0n1 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.451 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.711 nvme0n1 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.711 19:39:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.711 19:39:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.711 nvme0n1 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.711 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.971 nvme0n1 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.971 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.231 19:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.231 nvme0n1 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.231 
19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.231 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.491 19:39:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.491 nvme0n1 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.491 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
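[editorial sketch] The trace entries from host/auth.sh@42-51 above show nvmet_auth_set_key receiving a digest, DH group and keyid, then echoing 'hmac(shaN)', the DH group, the DHHC-1 host key and (when present) the controller key before each connect attempt. A minimal sketch of that helper, written from what the trace echoes; the configfs destination paths and the hostnqn directory name are assumptions, since the log shows only the values, not where they are written:

# Sketch only: push DH-HMAC-CHAP parameters for one host into the kernel nvmet target.
# The configfs root and host directory below are assumed, not taken from this log.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
    echo "hmac(${digest})" > "${hostdir}/dhchap_hash"      # e.g. 'hmac(sha512)' as echoed at auth.sh@48
    echo "${dhgroup}"      > "${hostdir}/dhchap_dhgroup"   # e.g. ffdhe3072 as echoed at auth.sh@49
    echo "${keys[keyid]}"  > "${hostdir}/dhchap_key"       # DHHC-1:0x:... host key (auth.sh@50)
    # The controller key is optional; keyid 4 has an empty ckey in this run (auth.sh@51).
    [[ -n "${ckeys[keyid]}" ]] && echo "${ckeys[keyid]}" > "${hostdir}/dhchap_ctrl_key"
}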
00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.492 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.751 nvme0n1 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.751 19:39:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:56.751 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
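The repeated nvmf/common.sh@741-755 lines are get_main_ns_ip resolving which address the initiator should dial: it keeps a small transport-to-variable table, picks the variable name for the active transport, and echoes that variable's value (10.0.0.1 in this run) via indirect expansion. A rough reconstruction follows; only the steps echoed by xtrace are certain, while the TEST_TRANSPORT variable name and the early return-1 exits are assumptions.

    # Sketch of the get_main_ns_ip logic visible in the trace above.
    # Assumes NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP and the transport
    # variable (called TEST_TRANSPORT here) are exported by the test setup.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
            [tcp]=NVMF_INITIATOR_IP       # TCP runs dial the initiator-side IP
        )

        [[ -z $TEST_TRANSPORT ]] && return 1                  # tcp in this log
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # name of the variable to read
        [[ -z ${!ip} ]] && return 1                           # indirect expansion: its value
        echo "${!ip}"                                         # prints 10.0.0.1 here
    }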
00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.752 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.020 nvme0n1 00:32:57.020 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.020 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.021 
19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.021 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.320 nvme0n1 00:32:57.320 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.320 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.320 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.320 19:39:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.320 19:39:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.320 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.578 nvme0n1 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.578 19:39:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.578 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.835 nvme0n1 00:32:57.835 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.835 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.835 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.835 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.835 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.835 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
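Every connect_authenticate pass in this trace has the same shape: restrict the SPDK bdev/nvme layer to one digest/DH-group pair, attach a controller to the kernel target at 10.0.0.1:4420 with the host key (plus the controller key when one is configured), confirm the controller appeared, and detach it before the next combination. The condensed sketch below uses only RPCs that appear verbatim in the trace; rpc_cmd is assumed to be the autotest wrapper around SPDK's RPC client, and key1/ckey1 refer to key names registered earlier in the script (registration is not part of this excerpt).

    # One authentication round, condensed from the trace (sha512 / ffdhe4096 / keyid=1 shown).
    digest=sha512 dhgroup=ffdhe4096 keyid=1

    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The round passes if the authenticated controller shows up as nvme0 ...
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # ... and is torn down before the next digest/dhgroup/key combination.
    rpc_cmd bdev_nvme_detach_controller nvme0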
00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.094 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.354 nvme0n1 00:32:58.354 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.354 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:58.354 19:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.354 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.354 19:39:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.354 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.613 nvme0n1 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.613 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.872 nvme0n1 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.872 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
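The host/auth.sh@101-103 markers that keep reappearing are the driver for all of these rounds: two nested loops over the configured DH groups and key indices, each iteration reprogramming the target-side key before reconnecting. A sketch of that structure, with the dhgroups list limited to the values that actually appear in this part of the log (the full arrays, including the keys themselves, are defined in host/auth.sh):

    # Driver loop reconstructed from the auth.sh@101-104 trace markers.
    # The keys/ckeys arrays and the complete dhgroups list come from host/auth.sh;
    # this excerpt of the log exercises the sha512 digest with keyids 0..4.
    digest=sha512
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach via SPDK
        done
    done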
00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.131 19:39:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.390 nvme0n1 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
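One detail worth noting from the auth.sh@58 lines: the controller key is injected through `${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}`, so for a key index with no controller key configured (keyid 4 in this run, where ckey is empty) the attach RPC is issued without --dhchap-ctrlr-key and only host-side authentication is exercised. A tiny illustration of that parameter expansion, with the key material elided:

    # ${var:+word} expands to "word" only when var is set and non-empty.
    ckeys=([1]="DHHC-1:02:..." [4]="")     # values elided; shapes taken from the trace

    keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"   # --dhchap-ctrlr-key ckey1   -> bidirectional auth

    keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"   # (empty)                    -> host-only auth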
00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.390 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.957 nvme0n1 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:32:59.957 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.958 19:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.217 nvme0n1 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.217 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.783 nvme0n1 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.783 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.784 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.042 nvme0n1 00:33:01.042 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.042 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.042 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.042 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.042 19:39:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.042 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNhNzM4MWY2MDFhYWY3N2MyMDdiNDIwYTA5MzdhMzW9Rkts: 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: ]] 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZjZTYyNmJiNjgwMTFjODJlNWViZmI4NmM5YjBkZjFkZTc0ZmExYWI3ZGUyNmFhNmQ1NmNkOWNjODM4YjA0NAa/MZs=: 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.300 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.301 19:39:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.868 nvme0n1 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.868 19:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.869 19:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.869 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.869 19:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.437 nvme0n1 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.437 19:39:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmZlOTUxM2Q4NzVmOWYxMDYwMWM5MGM2MDdjMTdiZTkCoAdW: 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: ]] 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDA1YTk4MWIwYWMzYzA1NzhjY2JjOWQ1NDViMmNlNjDYmLe4: 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.437 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.004 nvme0n1 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWY3Y2Q4NzY2ZmEzZGQzMzQzM2RiODdhNWMwODg2MWJkZTk1YzM0YzYyZTkxYzJjKsJSCg==: 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: ]] 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFmMzA0MTdhZTU5MDY2ZjEwZWY5ZWMzZTJlNWM5ODG7Qt83: 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:03.004 19:39:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.004 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.262 19:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.830 nvme0n1 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I5NTQ5YzI3MDMyNDczNzEyMGQwMzIxZTJkMDYzZDc3YTFiZjUyMWQ4ZGZhMzEyOWE1NTA3ZjMyMzkxOWJjZg75wAI=: 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:03.830 19:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.398 nvme0n1 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ2Yjk1N2JmNmVkYzJiMjk1Zjc1MmZiZGYxOTg2NGMwNzBlNmNiODJhMzBjZGE1iLGTWA==: 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: ]] 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgxNTFkOTZiMGQzMWMxZjg2YmU4NTE2MzgxNzQ4NzZiYTNiNzk3MmQ5YWM3MjBi47Idsw==: 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.398 
19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.398 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.399 request: 00:33:04.399 { 00:33:04.399 "name": "nvme0", 00:33:04.399 "trtype": "tcp", 00:33:04.399 "traddr": "10.0.0.1", 00:33:04.399 "adrfam": "ipv4", 00:33:04.399 "trsvcid": "4420", 00:33:04.399 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:04.399 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:04.399 "prchk_reftag": false, 00:33:04.399 "prchk_guard": false, 00:33:04.399 "hdgst": false, 00:33:04.399 "ddgst": false, 00:33:04.399 "method": "bdev_nvme_attach_controller", 00:33:04.399 "req_id": 1 00:33:04.399 } 00:33:04.399 Got JSON-RPC error response 00:33:04.399 response: 00:33:04.399 { 00:33:04.399 "code": -5, 00:33:04.399 "message": "Input/output error" 00:33:04.399 } 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.399 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.658 request: 00:33:04.658 { 00:33:04.658 "name": "nvme0", 00:33:04.658 "trtype": "tcp", 00:33:04.658 "traddr": "10.0.0.1", 00:33:04.658 "adrfam": "ipv4", 00:33:04.658 "trsvcid": "4420", 00:33:04.658 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:04.658 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:04.658 "prchk_reftag": false, 00:33:04.658 "prchk_guard": false, 00:33:04.658 "hdgst": false, 00:33:04.658 "ddgst": false, 00:33:04.658 "dhchap_key": "key2", 00:33:04.658 "method": "bdev_nvme_attach_controller", 00:33:04.658 "req_id": 1 00:33:04.658 } 00:33:04.658 Got JSON-RPC error response 00:33:04.658 response: 00:33:04.658 { 00:33:04.658 "code": -5, 00:33:04.658 "message": "Input/output error" 00:33:04.658 } 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:04.658 19:39:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.658 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.658 request: 00:33:04.658 { 00:33:04.658 "name": "nvme0", 00:33:04.658 "trtype": "tcp", 00:33:04.658 "traddr": "10.0.0.1", 00:33:04.658 "adrfam": "ipv4", 
00:33:04.658 "trsvcid": "4420", 00:33:04.658 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:04.658 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:04.658 "prchk_reftag": false, 00:33:04.659 "prchk_guard": false, 00:33:04.659 "hdgst": false, 00:33:04.659 "ddgst": false, 00:33:04.659 "dhchap_key": "key1", 00:33:04.659 "dhchap_ctrlr_key": "ckey2", 00:33:04.659 "method": "bdev_nvme_attach_controller", 00:33:04.659 "req_id": 1 00:33:04.659 } 00:33:04.659 Got JSON-RPC error response 00:33:04.659 response: 00:33:04.659 { 00:33:04.659 "code": -5, 00:33:04.659 "message": "Input/output error" 00:33:04.659 } 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:04.659 rmmod nvme_tcp 00:33:04.659 rmmod nvme_fabrics 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1815509 ']' 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1815509 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1815509 ']' 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1815509 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:04.659 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1815509 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1815509' 00:33:04.918 killing process with pid 1815509 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1815509 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1815509 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:04.918 19:39:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:07.453 19:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:09.986 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:09.986 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:10.552 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:10.809 19:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.axR /tmp/spdk.key-null.Qwm /tmp/spdk.key-sha256.Bof /tmp/spdk.key-sha384.DQW /tmp/spdk.key-sha512.Jp7 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:10.809 19:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:13.346 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:13.346 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:13.346 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:13.346 00:33:13.346 real 0m48.669s 00:33:13.346 user 0m43.611s 00:33:13.346 sys 0m11.444s 00:33:13.346 19:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:13.346 19:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.346 ************************************ 00:33:13.346 END TEST nvmf_auth_host 00:33:13.346 ************************************ 00:33:13.346 19:39:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:13.346 19:39:24 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:13.346 19:39:24 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:13.346 19:39:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:13.346 19:39:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:13.346 19:39:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.346 ************************************ 00:33:13.346 START TEST nvmf_digest 00:33:13.346 ************************************ 00:33:13.346 19:39:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:13.606 * Looking for test storage... 
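[editor's note: the nvmf_auth_host run above drives DH-HMAC-CHAP negotiation entirely through SPDK JSON-RPCs. As a rough hand-run sketch of the same sequence the trace shows (assuming a target is already listening on 10.0.0.1:4420 and that the keyN/ckeyN names were registered beforehand, exactly as host/auth.sh does; scripts/rpc.py stands in for the test's rpc_cmd wrapper):
  # restrict the initiator to one digest/DH-group pair, mirroring bdev_nvme_set_options in the trace
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # attach with a host key and controller key, as host/auth.sh@61 does for each keyid
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py bdev_nvme_get_controllers      # the test expects a single controller named nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0
Attaching without a key (or with a mismatched key), as in the NOT rpc_cmd cases above, is expected to fail with the -5 Input/output error responses logged earlier.]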
00:33:13.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:13.606 19:39:24 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:13.606 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:13.607 19:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:13.607 19:39:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:18.916 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.916 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:18.917 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:18.917 Found net devices under 0000:86:00.0: cvl_0_0 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:18.917 Found net devices under 0000:86:00.1: cvl_0_1 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:18.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:33:18.917 00:33:18.917 --- 10.0.0.2 ping statistics --- 00:33:18.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.917 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:18.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:33:18.917 00:33:18.917 --- 10.0.0.1 ping statistics --- 00:33:18.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.917 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:18.917 ************************************ 00:33:18.917 START TEST nvmf_digest_clean 00:33:18.917 ************************************ 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1829055 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1829055 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1829055 ']' 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:18.917 [2024-07-15 19:39:29.619073] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:18.917 [2024-07-15 19:39:29.619116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.917 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.917 [2024-07-15 19:39:29.649098] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:18.917 [2024-07-15 19:39:29.677393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.917 [2024-07-15 19:39:29.717653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.917 [2024-07-15 19:39:29.717687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:18.917 [2024-07-15 19:39:29.717695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:18.917 [2024-07-15 19:39:29.717701] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:18.917 [2024-07-15 19:39:29.717710] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
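The nvmf_tcp_init sequence traced above builds a small self-contained test network: the target-side port (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1, TCP port 4420 is opened in the firewall, and reachability is checked with one ping in each direction. Condensed into plain commands (namespace and interface names taken from the log), the setup is roughly:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1        # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                              # namespace that will host the NVMe-oF target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator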
00:33:18.917 [2024-07-15 19:39:29.717726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:18.917 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.177 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.178 null0 00:33:19.178 [2024-07-15 19:39:29.866421] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.178 [2024-07-15 19:39:29.890595] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1829079 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1829079 /var/tmp/bperf.sock 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1829079 ']' 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:19.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.178 19:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:19.178 [2024-07-15 19:39:29.938772] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:19.178 [2024-07-15 19:39:29.938816] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1829079 ] 00:33:19.178 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.178 [2024-07-15 19:39:29.965706] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:19.178 [2024-07-15 19:39:29.993398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.437 [2024-07-15 19:39:30.038032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.437 19:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:19.437 19:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:19.437 19:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:19.437 19:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:19.437 19:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:19.696 19:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.696 19:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.956 nvme0n1 00:33:19.956 19:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:19.956 19:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:19.956 Running I/O for 2 seconds... 
00:33:22.492 00:33:22.492 Latency(us) 00:33:22.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.492 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:22.492 nvme0n1 : 2.00 27065.69 105.73 0.00 0.00 4724.17 2179.78 11055.64 00:33:22.492 =================================================================================================================== 00:33:22.492 Total : 27065.69 105.73 0.00 0.00 4724.17 2179.78 11055.64 00:33:22.493 0 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:22.493 | select(.opcode=="crc32c") 00:33:22.493 | "\(.module_name) \(.executed)"' 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1829079 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1829079 ']' 00:33:22.493 19:39:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1829079 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1829079 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1829079' 00:33:22.493 killing process with pid 1829079 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1829079 00:33:22.493 Received shutdown signal, test time was about 2.000000 seconds 00:33:22.493 00:33:22.493 Latency(us) 00:33:22.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.493 =================================================================================================================== 00:33:22.493 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1829079 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:22.493 19:39:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1829746 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1829746 /var/tmp/bperf.sock 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1829746 ']' 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:22.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:22.493 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:22.493 [2024-07-15 19:39:33.244009] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:22.493 [2024-07-15 19:39:33.244058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1829746 ] 00:33:22.493 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:22.493 Zero copy mechanism will not be used. 00:33:22.493 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.493 [2024-07-15 19:39:33.270380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
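A note on reading the result tables: the MiB/s column is simply IOPS × I/O size / 2^20. For the 4 KiB randread run above, 27065.69 × 4096 B ≈ 110,861,066 B/s ≈ 105.73 MiB/s, matching the reported value; the 128 KiB runs below work out the same way.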
00:33:22.493 [2024-07-15 19:39:33.297600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.493 [2024-07-15 19:39:33.338696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.750 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:22.750 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:22.750 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:22.750 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:22.750 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:23.009 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:23.009 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:23.267 nvme0n1 00:33:23.267 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:23.267 19:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:23.267 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:23.267 Zero copy mechanism will not be used. 00:33:23.267 Running I/O for 2 seconds... 
00:33:25.799 00:33:25.799 Latency(us) 00:33:25.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.799 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:25.799 nvme0n1 : 2.00 4321.66 540.21 0.00 0.00 3699.05 968.79 9402.99 00:33:25.799 =================================================================================================================== 00:33:25.799 Total : 4321.66 540.21 0.00 0.00 3699.05 968.79 9402.99 00:33:25.799 0 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:25.799 | select(.opcode=="crc32c") 00:33:25.799 | "\(.module_name) \(.executed)"' 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1829746 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1829746 ']' 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1829746 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1829746 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1829746' 00:33:25.799 killing process with pid 1829746 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1829746 00:33:25.799 Received shutdown signal, test time was about 2.000000 seconds 00:33:25.799 00:33:25.799 Latency(us) 00:33:25.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.799 =================================================================================================================== 00:33:25.799 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1829746 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:25.799 19:39:36 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1830232 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1830232 /var/tmp/bperf.sock 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1830232 ']' 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:25.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:25.799 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:25.799 [2024-07-15 19:39:36.522229] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:25.799 [2024-07-15 19:39:36.522279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830232 ] 00:33:25.799 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.799 [2024-07-15 19:39:36.549091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
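After each two-second run the script checks that the crc32c digest work was actually executed and by which accel module; with DSA disabled the expected module is software, which is what the accel_get_stats output above shows. The check amounts to the following sketch (the jq filter is verbatim from the trace, the surrounding shell is a reconstruction):

  read -r acc_module acc_executed < <(
      $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))            # digests were actually computed during the run
  [[ $acc_module == software ]]     # and by the expected module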
00:33:25.799 [2024-07-15 19:39:36.578221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.799 [2024-07-15 19:39:36.616368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.058 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:26.058 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:26.058 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:26.058 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:26.058 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:26.058 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.058 19:39:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.626 nvme0n1 00:33:26.626 19:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:26.626 19:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:26.626 Running I/O for 2 seconds... 00:33:28.531 00:33:28.531 Latency(us) 00:33:28.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.531 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:28.531 nvme0n1 : 2.00 26950.27 105.27 0.00 0.00 4741.28 3875.17 8434.20 00:33:28.531 =================================================================================================================== 00:33:28.531 Total : 26950.27 105.27 0.00 0.00 4741.28 3875.17 8434.20 00:33:28.531 0 00:33:28.531 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:28.532 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:28.532 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:28.532 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:28.532 | select(.opcode=="crc32c") 00:33:28.532 | "\(.module_name) \(.executed)"' 00:33:28.532 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1830232 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 1830232 ']' 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1830232 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1830232 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1830232' 00:33:28.791 killing process with pid 1830232 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1830232 00:33:28.791 Received shutdown signal, test time was about 2.000000 seconds 00:33:28.791 00:33:28.791 Latency(us) 00:33:28.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.791 =================================================================================================================== 00:33:28.791 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:28.791 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1830232 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1830708 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1830708 /var/tmp/bperf.sock 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1830708 ']' 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:29.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:29.050 [2024-07-15 19:39:39.735473] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:29.050 [2024-07-15 19:39:39.735521] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830708 ] 00:33:29.050 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:29.050 Zero copy mechanism will not be used. 00:33:29.050 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.050 [2024-07-15 19:39:39.761729] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:29.050 [2024-07-15 19:39:39.789056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.050 [2024-07-15 19:39:39.825149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:29.050 19:39:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:29.309 19:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.309 19:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.568 nvme0n1 00:33:29.568 19:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:29.568 19:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:29.826 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:29.826 Zero copy mechanism will not be used. 00:33:29.826 Running I/O for 2 seconds... 
00:33:31.732 00:33:31.732 Latency(us) 00:33:31.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.732 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:31.732 nvme0n1 : 2.00 5997.89 749.74 0.00 0.00 2663.91 1688.26 15728.64 00:33:31.732 =================================================================================================================== 00:33:31.732 Total : 5997.89 749.74 0.00 0.00 2663.91 1688.26 15728.64 00:33:31.732 0 00:33:31.732 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:31.732 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:31.732 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:31.732 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:31.732 | select(.opcode=="crc32c") 00:33:31.732 | "\(.module_name) \(.executed)"' 00:33:31.732 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1830708 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1830708 ']' 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1830708 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1830708 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1830708' 00:33:31.990 killing process with pid 1830708 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1830708 00:33:31.990 Received shutdown signal, test time was about 2.000000 seconds 00:33:31.990 00:33:31.990 Latency(us) 00:33:31.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.990 =================================================================================================================== 00:33:31.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:31.990 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1830708 00:33:32.248 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1829055 00:33:32.248 19:39:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1829055 ']' 00:33:32.248 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1829055 00:33:32.248 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:32.248 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:32.248 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1829055 00:33:32.248 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:32.248 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:32.248 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1829055' 00:33:32.248 killing process with pid 1829055 00:33:32.248 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1829055 00:33:32.248 19:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1829055 00:33:32.505 00:33:32.505 real 0m13.538s 00:33:32.505 user 0m25.661s 00:33:32.505 sys 0m4.232s 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:32.505 ************************************ 00:33:32.505 END TEST nvmf_digest_clean 00:33:32.505 ************************************ 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:32.505 ************************************ 00:33:32.505 START TEST nvmf_digest_error 00:33:32.505 ************************************ 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1831305 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1831305 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1831305 ']' 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.505 [2024-07-15 19:39:43.222353] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:32.505 [2024-07-15 19:39:43.222397] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.505 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.505 [2024-07-15 19:39:43.251843] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:32.505 [2024-07-15 19:39:43.280628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.505 [2024-07-15 19:39:43.320834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.505 [2024-07-15 19:39:43.320870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:32.505 [2024-07-15 19:39:43.320881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.505 [2024-07-15 19:39:43.320887] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.505 [2024-07-15 19:39:43.320892] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
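The nvmf_digest_error test starting here exercises the failure path: the crc32c opcode is routed through the error accel module on the target application and corruption is then injected (accel_error_inject_error -t corrupt -i 256), so the data digests stop matching the payload and the initiator's nvme_tcp layer reports the 'data digest error' / COMMAND TRANSIENT TRANSPORT ERROR completions seen further down. Condensed from the RPCs in the trace — rpc_cmd goes to the nvmf target's default socket and bperf_rpc to the bdevperf socket, which is a reading of the trace rather than something it states explicitly:

  # target application
  rpc.py accel_assign_opc -o crc32c -m error                      # route crc32c through the error module
  # bdevperf side
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # keep injection off while connecting, then enable corruption and run the workload
  rpc.py accel_error_inject_error -o crc32c -t disable            # target application
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256     # target application
  bdevperf.py -s /var/tmp/bperf.sock perform_tests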
00:33:32.505 [2024-07-15 19:39:43.320907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:32.505 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.762 [2024-07-15 19:39:43.385319] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.762 null0 00:33:32.762 [2024-07-15 19:39:43.468804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.762 [2024-07-15 19:39:43.492971] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1831443 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1831443 /var/tmp/bperf.sock 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1831443 ']' 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:32.762 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.762 [2024-07-15 19:39:43.540904] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:32.762 [2024-07-15 19:39:43.540946] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831443 ] 00:33:32.762 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.762 [2024-07-15 19:39:43.567248] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:32.762 [2024-07-15 19:39:43.595576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.019 [2024-07-15 19:39:43.636518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.019 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:33.019 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:33.019 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:33.019 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:33.278 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:33.278 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.278 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.278 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.278 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.278 19:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.536 nvme0n1 00:33:33.536 19:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:33.536 19:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.536 19:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.536 19:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.536 19:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@69 -- # bperf_py perform_tests 00:33:33.536 19:39:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.795 Running I/O for 2 seconds... 00:33:33.795 [2024-07-15 19:39:44.476706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.476739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.476750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.486955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.486979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.486988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.497323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.497345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.497359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.506829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.506850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.506858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.515406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.515427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.515435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.525050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.525073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.525081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.535307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 
[2024-07-15 19:39:44.535329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.535337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.544622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.544643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.544651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.555155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.555176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.555185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.564373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.564393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.564402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.572903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.572924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.572932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.583063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.583085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.795 [2024-07-15 19:39:44.583093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.795 [2024-07-15 19:39:44.592696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.795 [2024-07-15 19:39:44.592718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.796 [2024-07-15 19:39:44.592726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.796 [2024-07-15 19:39:44.602135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x156a9b0) 00:33:33.796 [2024-07-15 19:39:44.602157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.796 [2024-07-15 19:39:44.602165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.796 [2024-07-15 19:39:44.611561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.796 [2024-07-15 19:39:44.611582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.796 [2024-07-15 19:39:44.611590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.796 [2024-07-15 19:39:44.621019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.796 [2024-07-15 19:39:44.621040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.796 [2024-07-15 19:39:44.621049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.796 [2024-07-15 19:39:44.630492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.796 [2024-07-15 19:39:44.630513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.796 [2024-07-15 19:39:44.630521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.796 [2024-07-15 19:39:44.639643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:33.796 [2024-07-15 19:39:44.639665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.796 [2024-07-15 19:39:44.639673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.649663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.649686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.649695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.658694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.658716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.658728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.668922] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.668945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.668954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.679320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.679343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.679352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.687984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.688007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.688015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.698634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.698655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.698664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.708783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.708805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.708813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.718530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.718551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.718559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.728058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.728079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.728088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:34.057 [2024-07-15 19:39:44.737477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.737499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.737507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.747306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.747332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.747341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.757795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.757818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.057 [2024-07-15 19:39:44.757827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.057 [2024-07-15 19:39:44.767332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.057 [2024-07-15 19:39:44.767353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.767362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.776741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.776762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.776771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.786094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.786116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.786124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.797106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.797127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.797136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.806239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.806261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.806269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.816194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.816215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.816223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.825327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.825348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.825357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.835512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.835534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.835543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.845088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.845109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.845117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.853661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.853683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.853691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.863116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.863138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.863146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.873112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.873133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.873141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.881770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.881790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.881799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.891843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.891864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.891872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.902153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.902174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.902183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.058 [2024-07-15 19:39:44.910475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.058 [2024-07-15 19:39:44.910497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.058 [2024-07-15 19:39:44.910508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:44.921886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:44.921909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:44.921917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:44.930652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:44.930673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:34.419 [2024-07-15 19:39:44.930681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:44.940609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:44.940631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:44.940639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:44.951124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:44.951146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:44.951154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:44.960657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:44.960678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:44.960686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:44.968682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:44.968704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:44.968712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:44.979419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:44.979443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:44.979452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:44.988513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:44.988534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:44.988542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:44.997052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:44.997073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:19216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:44.997081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.007097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.007118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.007126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.017260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.017280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.017289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.025576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.025597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.025606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.034704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.034725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.034733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.044798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.044819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.044827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.054659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.054680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.054689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.064625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.064646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.064654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.072939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.072961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.072973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.082629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.082650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.082658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.091744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.091765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.091773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.101014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.101035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.101044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.110713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.110734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.110742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.119977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.119997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.120006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.129083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 
00:33:34.419 [2024-07-15 19:39:45.129104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.129112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.139053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.139073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.139081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.148265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.148285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.148293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.157326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.157351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.157359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.167147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.167168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.419 [2024-07-15 19:39:45.167176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.419 [2024-07-15 19:39:45.176427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.419 [2024-07-15 19:39:45.176447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.176456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.420 [2024-07-15 19:39:45.185366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.420 [2024-07-15 19:39:45.185387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.185395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.420 [2024-07-15 19:39:45.196790] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.420 [2024-07-15 19:39:45.196810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.196818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.420 [2024-07-15 19:39:45.206039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.420 [2024-07-15 19:39:45.206059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.206067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.420 [2024-07-15 19:39:45.215246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.420 [2024-07-15 19:39:45.215267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.215274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.420 [2024-07-15 19:39:45.224951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.420 [2024-07-15 19:39:45.224971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.224981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.420 [2024-07-15 19:39:45.233747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.420 [2024-07-15 19:39:45.233769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.233777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.420 [2024-07-15 19:39:45.242827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.420 [2024-07-15 19:39:45.242848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.242856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.420 [2024-07-15 19:39:45.253189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.420 [2024-07-15 19:39:45.253211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.253220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:34.420 [2024-07-15 19:39:45.262064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.420 [2024-07-15 19:39:45.262084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.262093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.420 [2024-07-15 19:39:45.271647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.420 [2024-07-15 19:39:45.271668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.420 [2024-07-15 19:39:45.271677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.678 [2024-07-15 19:39:45.281349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.678 [2024-07-15 19:39:45.281371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.678 [2024-07-15 19:39:45.281379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.678 [2024-07-15 19:39:45.291880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.678 [2024-07-15 19:39:45.291901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.678 [2024-07-15 19:39:45.291909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.678 [2024-07-15 19:39:45.300880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.678 [2024-07-15 19:39:45.300901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.678 [2024-07-15 19:39:45.300909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.678 [2024-07-15 19:39:45.311024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.678 [2024-07-15 19:39:45.311045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.678 [2024-07-15 19:39:45.311053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.678 [2024-07-15 19:39:45.320526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.678 [2024-07-15 19:39:45.320547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.678 [2024-07-15 19:39:45.320559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.328858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.328878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.328886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.338675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.338696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.338705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.349097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.349117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.349125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.357239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.357260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.357268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.367555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.367575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.367583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.377890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.377911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.377919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.385701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.385721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.385730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.395834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.395855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.395863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.405434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.405458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.405466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.414407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.414427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.414435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.424361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.424382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.424390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.434776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.434797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.434805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.442937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.442957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.442965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.452788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.452809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:34.679 [2024-07-15 19:39:45.452817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.462316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.462336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.462344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.471526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.471548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.471556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.481679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.481700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.481711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.490901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.490922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.490930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.500248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.500269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.500277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.510643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.510664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.510672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.518944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.518964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7200 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.518972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.679 [2024-07-15 19:39:45.528928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.679 [2024-07-15 19:39:45.528949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.679 [2024-07-15 19:39:45.528958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.939 [2024-07-15 19:39:45.538211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.939 [2024-07-15 19:39:45.538240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.939 [2024-07-15 19:39:45.538248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.939 [2024-07-15 19:39:45.548753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.939 [2024-07-15 19:39:45.548774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.939 [2024-07-15 19:39:45.548783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.939 [2024-07-15 19:39:45.558310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.939 [2024-07-15 19:39:45.558331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.939 [2024-07-15 19:39:45.558340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.939 [2024-07-15 19:39:45.567365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.939 [2024-07-15 19:39:45.567389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.939 [2024-07-15 19:39:45.567397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.939 [2024-07-15 19:39:45.577353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.939 [2024-07-15 19:39:45.577374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.939 [2024-07-15 19:39:45.577382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.939 [2024-07-15 19:39:45.586369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.939 [2024-07-15 19:39:45.586390] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.939 [2024-07-15 19:39:45.586398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.939 [2024-07-15 19:39:45.596144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.939 [2024-07-15 19:39:45.596165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.939 [2024-07-15 19:39:45.596173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.939 [2024-07-15 19:39:45.604934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.939 [2024-07-15 19:39:45.604955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.939 [2024-07-15 19:39:45.604963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.939 [2024-07-15 19:39:45.614047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.939 [2024-07-15 19:39:45.614068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.939 [2024-07-15 19:39:45.614076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.624817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.624837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.624845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.634330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.634350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.634358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.642760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.642780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.642787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.653545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.653566] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.653574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.662840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.662860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.662868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.672033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.672054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.672062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.681580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.681601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.681609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.690390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.690411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.690419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.700051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.700073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.700081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.710892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.710915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.710923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.719260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.719281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.719289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.729658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.729679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.729691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.739032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.739053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.739061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.747583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.747604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.747612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.757890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.757910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.757918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.767044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.767066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.767075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.777135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.777156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.777165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.940 [2024-07-15 19:39:45.786584] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:34.940 [2024-07-15 19:39:45.786605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.940 [2024-07-15 19:39:45.786613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.221 [2024-07-15 19:39:45.796891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.221 [2024-07-15 19:39:45.796912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.221 [2024-07-15 19:39:45.796922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.221 [2024-07-15 19:39:45.806455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.221 [2024-07-15 19:39:45.806476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.221 [2024-07-15 19:39:45.806484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.815657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.815681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.815690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.824926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.824947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.824956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.834871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.834892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.834901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.843800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.843820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.843830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.853760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.853781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.853789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.862908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.862929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.862937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.872357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.872380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.872389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.881876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.881897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.881906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.891363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.891387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.891396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.900391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.900413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.900422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.910864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.910887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.910896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.919911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.919936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.919944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.929491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.929514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.929522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.939704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.939725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.939734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.948065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.948086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.948094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.958285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.958305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.958313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.968122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.968143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.968152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.976760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.976787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.976795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.986556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.986577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.986585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:45.995932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:45.995953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:45.995961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:46.007600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:46.007621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:46.007629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:46.016563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:46.016584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:46.016592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:46.026442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:46.026465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.222 [2024-07-15 19:39:46.026473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.222 [2024-07-15 19:39:46.034882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.222 [2024-07-15 19:39:46.034902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.223 [2024-07-15 19:39:46.034911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.223 [2024-07-15 19:39:46.045556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.223 [2024-07-15 19:39:46.045577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:35.223 [2024-07-15 19:39:46.045586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.223 [2024-07-15 19:39:46.054600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.223 [2024-07-15 19:39:46.054620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.223 [2024-07-15 19:39:46.054628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.223 [2024-07-15 19:39:46.063640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.223 [2024-07-15 19:39:46.063662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.223 [2024-07-15 19:39:46.063670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.223 [2024-07-15 19:39:46.073834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.223 [2024-07-15 19:39:46.073856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.223 [2024-07-15 19:39:46.073864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.082978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.083002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.083010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.093416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.093437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.093445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.102774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.102796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.102804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.112277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.112298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:2669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.112307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.123697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.123720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.123728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.133583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.133603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.133612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.143284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.143306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.143319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.151849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.151870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.151879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.162492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.162514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.162523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.171438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.171470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.171478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.182521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.182543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.182552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.191876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.191897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.191905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.201414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.201435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.201443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.211411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.211431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.211439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.220671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.220691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.220700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.229737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.229764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.229772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.239241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.482 [2024-07-15 19:39:46.239262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.482 [2024-07-15 19:39:46.239270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.482 [2024-07-15 19:39:46.249275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 
00:33:35.482 [2024-07-15 19:39:46.249296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.483 [2024-07-15 19:39:46.249304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.483 [2024-07-15 19:39:46.259245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.483 [2024-07-15 19:39:46.259266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.483 [2024-07-15 19:39:46.259275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.483 [2024-07-15 19:39:46.268902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.483 [2024-07-15 19:39:46.268923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.483 [2024-07-15 19:39:46.268931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.483 [2024-07-15 19:39:46.277796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.483 [2024-07-15 19:39:46.277817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.483 [2024-07-15 19:39:46.277825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.483 [2024-07-15 19:39:46.288981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.483 [2024-07-15 19:39:46.289003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.483 [2024-07-15 19:39:46.289011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.483 [2024-07-15 19:39:46.298900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.483 [2024-07-15 19:39:46.298920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.483 [2024-07-15 19:39:46.298929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.483 [2024-07-15 19:39:46.307803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.483 [2024-07-15 19:39:46.307825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.483 [2024-07-15 19:39:46.307833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.483 [2024-07-15 19:39:46.317717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.483 [2024-07-15 19:39:46.317738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.483 [2024-07-15 19:39:46.317746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.483 [2024-07-15 19:39:46.328043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.483 [2024-07-15 19:39:46.328064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.483 [2024-07-15 19:39:46.328072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.742 [2024-07-15 19:39:46.336810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.742 [2024-07-15 19:39:46.336832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.336841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.347238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.347259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.347267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.356123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.356143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.356151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.365395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.365415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.365423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.376458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.376479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.376488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.386030] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.386051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.386060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.395796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.395817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.395829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.405380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.405401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.405409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.415640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.415660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.415669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.425018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.425040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.425048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.434936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.434956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.434964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.743 [2024-07-15 19:39:46.443787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0) 00:33:35.743 [2024-07-15 19:39:46.443808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.743 [2024-07-15 19:39:46.443816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:33:35.743 [2024-07-15 19:39:46.453108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0)
00:33:35.743 [2024-07-15 19:39:46.453128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:35.743 [2024-07-15 19:39:46.453136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:35.743 [2024-07-15 19:39:46.462320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x156a9b0)
00:33:35.743 [2024-07-15 19:39:46.462341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:35.743 [2024-07-15 19:39:46.462349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:35.743
00:33:35.743 Latency(us)
00:33:35.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:35.743 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:35.743 nvme0n1 : 2.00 26516.20 103.58 0.00 0.00 4821.68 2322.25 14588.88
00:33:35.743 ===================================================================================================================
00:33:35.743 Total : 26516.20 103.58 0.00 0.00 4821.68 2322.25 14588.88
00:33:35.743 0
00:33:35.743 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:35.743 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:35.743 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:35.743 | .driver_specific
00:33:35.743 | .nvme_error
00:33:35.743 | .status_code
00:33:35.743 | .command_transient_transport_error'
00:33:35.743 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 ))
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1831443
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1831443 ']'
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1831443
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1831443
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1831443'
00:33:36.002 killing process with pid 1831443
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1831443
00:33:36.002 Received shutdown signal, test time was about 2.000000 seconds
00:33:36.002
00:33:36.002 Latency(us)
00:33:36.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:36.002 ===================================================================================================================
00:33:36.002 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:36.002 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1831443
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1831922
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1831922 /var/tmp/bperf.sock
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1831922 ']'
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:36.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:36.262 19:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:36.262 [2024-07-15 19:39:46.927825] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization...
00:33:36.262 [2024-07-15 19:39:46.927874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831922 ]
00:33:36.262 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:36.262 Zero copy mechanism will not be used.
00:33:36.262 EAL: No free 2048 kB hugepages reported on node 1
00:33:36.262 [2024-07-15 19:39:46.954057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:36.262 [2024-07-15 19:39:46.982517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:36.262 [2024-07-15 19:39:47.021109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:36.262 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:36.262 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:36.262 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:36.262 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:36.521 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:36.521 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:36.521 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:36.521 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:36.521 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:36.521 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:36.780 nvme0n1
00:33:37.040 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:37.040 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:37.040 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:37.040 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:37.040 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:37.040 19:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:37.040 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:37.040 Zero copy mechanism will not be used.
00:33:37.040 Running I/O for 2 seconds...
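(For reference, the setup traced above boils down to the shell sequence below. This is a minimal sketch assembled only from commands already visible in the trace; the workspace path, RPC socket, target address and NQN are taken verbatim from the log. The socket that rpc_cmd uses for accel_error_inject_error is not shown in the trace, so the default rpc.py socket is assumed here; this is an illustration, not the contents of host/digest.sh.)

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # Start the bdevperf initiator with its RPC server on $SOCK; -z makes it
    # wait until the perform_tests RPC is issued (arguments as in the trace).
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &

    # Enable per-status-code NVMe error counters so digest failures are
    # visible in bdev_get_iostat; --bdev-retry-count -1 matches the trace.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe/TCP controller with data digest enabled (--ddgst).
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt crc32c results in the accel layer (same arguments as the
    # rpc_cmd call above; RPC socket assumed to be the default).
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the 2-second randread job, then read back the transient transport
    # error count the same way get_transient_errcount does above.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Each corrupted digest then appears in the output that follows as a "data digest error" from nvme_tcp.c, and the affected command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the counter the jq filter reads back.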
00:33:37.040 [2024-07-15 19:39:47.763103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.763137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.763148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.774222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.774254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.774268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.784085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.784108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.784117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.793890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.793913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.793922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.803937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.803959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.803968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.812173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.812195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.812203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.821180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.821202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.821210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.829825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.829848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.829857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.838085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.838107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.838116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.846532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.846554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.846562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.854698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.040 [2024-07-15 19:39:47.854719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.040 [2024-07-15 19:39:47.854728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.040 [2024-07-15 19:39:47.862325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.041 [2024-07-15 19:39:47.862346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-15 19:39:47.862354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.041 [2024-07-15 19:39:47.869351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.041 [2024-07-15 19:39:47.869372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-15 19:39:47.869381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.041 [2024-07-15 19:39:47.877836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.041 [2024-07-15 19:39:47.877859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-15 19:39:47.877868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.041 [2024-07-15 19:39:47.885391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.041 [2024-07-15 19:39:47.885414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-15 19:39:47.885423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.041 [2024-07-15 19:39:47.893479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.041 [2024-07-15 19:39:47.893501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.041 [2024-07-15 19:39:47.893509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.300 [2024-07-15 19:39:47.900661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.300 [2024-07-15 19:39:47.900684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-15 19:39:47.900693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.300 [2024-07-15 19:39:47.910139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.300 [2024-07-15 19:39:47.910161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-15 19:39:47.910170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.300 [2024-07-15 19:39:47.921890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.300 [2024-07-15 19:39:47.921913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-15 19:39:47.921925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.300 [2024-07-15 19:39:47.932768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.300 [2024-07-15 19:39:47.932791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-15 19:39:47.932799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.300 [2024-07-15 19:39:47.942505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.300 [2024-07-15 19:39:47.942527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:37.300 [2024-07-15 19:39:47.942536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.300 [2024-07-15 19:39:47.953310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.300 [2024-07-15 19:39:47.953332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-15 19:39:47.953341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.300 [2024-07-15 19:39:47.963047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.300 [2024-07-15 19:39:47.963069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-15 19:39:47.963077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.300 [2024-07-15 19:39:47.973357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.300 [2024-07-15 19:39:47.973379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-15 19:39:47.973387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.300 [2024-07-15 19:39:47.984550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.300 [2024-07-15 19:39:47.984572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-15 19:39:47.984581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.300 [2024-07-15 19:39:47.993901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.300 [2024-07-15 19:39:47.993923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.300 [2024-07-15 19:39:47.993932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.004911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.004933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.004942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.014595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.014621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.014630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.023344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.023366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.023375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.033159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.033182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.033191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.042780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.042801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.042810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.053106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.053128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.053136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.063439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.063460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.063469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.074125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.074148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.074157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.084278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.084299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.084308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.095135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.095156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.095165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.104514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.104537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.104545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.114149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.114172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.114181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.124565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.124587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.124595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.134296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.134319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.134327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.144216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.301 [2024-07-15 19:39:48.144244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.144253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.301 [2024-07-15 19:39:48.153317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 
00:33:37.301 [2024-07-15 19:39:48.153339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.301 [2024-07-15 19:39:48.153348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.161793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.161817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.161826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.171934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.171956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.171965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.182099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.182121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.182134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.191366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.191390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.191398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.202980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.203003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.203012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.212851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.212873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.212883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.224656] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.224679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.224688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.236071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.236092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.236101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.248142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.248163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.248172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.259650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.259673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.259682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.270418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.270439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.270448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.281043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.281067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.281076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.293716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.293739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.293747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.304091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.304112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.304121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.314621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.314643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.314651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.323183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.323206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.561 [2024-07-15 19:39:48.323214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.561 [2024-07-15 19:39:48.331174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.561 [2024-07-15 19:39:48.331197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.562 [2024-07-15 19:39:48.331205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.562 [2024-07-15 19:39:48.340018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.562 [2024-07-15 19:39:48.340041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.562 [2024-07-15 19:39:48.340050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.562 [2024-07-15 19:39:48.347764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.562 [2024-07-15 19:39:48.347787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.562 [2024-07-15 19:39:48.347796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.562 [2024-07-15 19:39:48.355654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.562 [2024-07-15 19:39:48.355676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.562 [2024-07-15 19:39:48.355689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.562 [2024-07-15 19:39:48.364453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.562 [2024-07-15 19:39:48.364474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.562 [2024-07-15 19:39:48.364483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.562 [2024-07-15 19:39:48.372180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.562 [2024-07-15 19:39:48.372202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.562 [2024-07-15 19:39:48.372211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.562 [2024-07-15 19:39:48.379910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.562 [2024-07-15 19:39:48.379932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.562 [2024-07-15 19:39:48.379941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.562 [2024-07-15 19:39:48.389110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.562 [2024-07-15 19:39:48.389132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.562 [2024-07-15 19:39:48.389140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.562 [2024-07-15 19:39:48.400559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.562 [2024-07-15 19:39:48.400581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.562 [2024-07-15 19:39:48.400590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.562 [2024-07-15 19:39:48.410243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.562 [2024-07-15 19:39:48.410265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.562 [2024-07-15 19:39:48.410274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.420193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.420216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.420230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.428037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.428061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.428070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.435769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.435804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.435813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.442707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.442729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.442738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.450011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.450034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.450043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.456601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.456624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.456633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.464653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.464677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.464686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.472505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.472527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 
[2024-07-15 19:39:48.472535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.479858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.479881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.479889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.487076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.487099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.487107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.493493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.493515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.493524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.501409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.501431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-07-15 19:39:48.501440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.822 [2024-07-15 19:39:48.508993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.822 [2024-07-15 19:39:48.509016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.509025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.516519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.516542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.516550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.523500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.523522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.523531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.530672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.530695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.530704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.538530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.538552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.538561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.545540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.545562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.545571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.553103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.553125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.553134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.560895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.560917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.560931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.564442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.564465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.564473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.570875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.570897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.570905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.579541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.579564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.579572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.589339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.589362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.589370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.599469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.599492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.599501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.608516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.608538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.608547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.616755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.616778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.616787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.629667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.629690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.629699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.639686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.639709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.639717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.649705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.649728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.649737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.659540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.659563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.659572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.823 [2024-07-15 19:39:48.669297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:37.823 [2024-07-15 19:39:48.669319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.823 [2024-07-15 19:39:48.669328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.679673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.679698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.679707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.690786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.690810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.690819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.701856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.701880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.701889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.712550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 
[2024-07-15 19:39:48.712573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.712582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.723771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.723793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.723806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.734462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.734485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.734494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.744078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.744101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.744110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.753543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.753566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.753575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.761251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.761274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.761282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.769452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.769477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.769486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.777821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.777845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.777853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.786200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.786223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.786239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.795432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.795455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.795463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.804648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.804680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.804688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.814440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.814465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.814474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.825325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.084 [2024-07-15 19:39:48.825350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.084 [2024-07-15 19:39:48.825359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.084 [2024-07-15 19:39:48.835164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.835188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.835197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.085 [2024-07-15 19:39:48.845503] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.845526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.845534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.085 [2024-07-15 19:39:48.854791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.854814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.854823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.085 [2024-07-15 19:39:48.864301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.864323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.864332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.085 [2024-07-15 19:39:48.873735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.873759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.873767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.085 [2024-07-15 19:39:48.883670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.883694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.883703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.085 [2024-07-15 19:39:48.892955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.892978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.892987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.085 [2024-07-15 19:39:48.901760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.901783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.901792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:38.085 [2024-07-15 19:39:48.909579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.909603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.909611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.085 [2024-07-15 19:39:48.918510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.918533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.918541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.085 [2024-07-15 19:39:48.926466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.926488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.926497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.085 [2024-07-15 19:39:48.934429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.085 [2024-07-15 19:39:48.934451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.085 [2024-07-15 19:39:48.934460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.345 [2024-07-15 19:39:48.942625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.345 [2024-07-15 19:39:48.942648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.345 [2024-07-15 19:39:48.942657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.345 [2024-07-15 19:39:48.950286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.345 [2024-07-15 19:39:48.950309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.345 [2024-07-15 19:39:48.950318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.345 [2024-07-15 19:39:48.957757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.345 [2024-07-15 19:39:48.957779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.345 [2024-07-15 19:39:48.957793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.345 [2024-07-15 19:39:48.965310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.345 [2024-07-15 19:39:48.965331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.345 [2024-07-15 19:39:48.965340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.345 [2024-07-15 19:39:48.972328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.345 [2024-07-15 19:39:48.972349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.345 [2024-07-15 19:39:48.972357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.345 [2024-07-15 19:39:48.979588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.345 [2024-07-15 19:39:48.979611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.345 [2024-07-15 19:39:48.979619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.345 [2024-07-15 19:39:48.987376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:48.987400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:48.987409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:48.994764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:48.994788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:48.994797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.002586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.002609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.002618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.013595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.013618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.013626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.022897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.022921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.022931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.031561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.031589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.031598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.040279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.040302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.040311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.048715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.048738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.048747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.057172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.057194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.057203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.066383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.066406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.066415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.075568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.075591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.075600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.084066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.084090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.084099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.092331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.092355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.092364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.102493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.102517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.102525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.114649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.114671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.114679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.124636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.124660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.124668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.134814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.134837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.134846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.144309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.144332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 
[2024-07-15 19:39:49.144341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.153364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.153387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.153396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.162883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.162906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.162915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.172108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.172131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.172139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.180951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.180975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.180983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.189372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.189395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.189408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.346 [2024-07-15 19:39:49.198223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.346 [2024-07-15 19:39:49.198255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.346 [2024-07-15 19:39:49.198264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.207141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.207165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.207174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.216577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.216600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.216609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.226670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.226693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.226702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.237253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.237276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.237284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.246203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.246235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.246244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.255532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.255554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.255564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.267306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.267329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.267339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.276579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.276602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.276611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.287886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.287911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.287919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.296702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.296724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.296733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.305184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.305207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.305216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.315555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.315579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.315588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.324143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.324166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.324175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.334101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.334124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.334133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.344993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.345016] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.345025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.355851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.355874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.355887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.365967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.365989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.365998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.376152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.376176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.376185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.387526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.387549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.387557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.397668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.397691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.397699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.407924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.407948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.407957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.419246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.419269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.419278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.428096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.428120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.428129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.438961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.438985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.438994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.447710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.447738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.447747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.606 [2024-07-15 19:39:49.453255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.606 [2024-07-15 19:39:49.453277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.606 [2024-07-15 19:39:49.453285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.462952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.462974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.462983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.472554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.472575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.472585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.482380] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.482403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.482413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.491956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.491978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.491987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.501672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.501694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.501703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.512363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.512386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.512394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.521772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.521797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.521806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.532151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.532172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.532181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.541114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.541136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.541144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.548853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.548875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.548884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.557267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.557288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.557297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.566584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.566607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.566616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.576579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.576601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.576610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.587103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.587125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.587134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.596632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.596653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.596662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.606400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.606422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.606435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.617688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.617710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.617719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.628715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.628737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.628745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.638179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.638201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.638209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.647720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.647741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.647750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.657914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.657936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.657945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.667443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.667465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.667473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.676955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.676979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.676988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.686230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.686253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.686262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.695154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.866 [2024-07-15 19:39:49.695181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.866 [2024-07-15 19:39:49.695189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.866 [2024-07-15 19:39:49.703998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.867 [2024-07-15 19:39:49.704020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.867 [2024-07-15 19:39:49.704028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.867 [2024-07-15 19:39:49.713362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:38.867 [2024-07-15 19:39:49.713383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.867 [2024-07-15 19:39:49.713392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.125 [2024-07-15 19:39:49.721882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:39.125 [2024-07-15 19:39:49.721904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.125 [2024-07-15 19:39:49.721912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.125 [2024-07-15 19:39:49.731587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:39.125 [2024-07-15 19:39:49.731610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.125 [2024-07-15 19:39:49.731619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.125 [2024-07-15 19:39:49.740002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:39.125 [2024-07-15 19:39:49.740025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:39.126 [2024-07-15 19:39:49.740034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.126 [2024-07-15 19:39:49.749154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1541d10) 00:33:39.126 [2024-07-15 19:39:49.749176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.126 [2024-07-15 19:39:49.749185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.126 00:33:39.126 Latency(us) 00:33:39.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.126 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:39.126 nvme0n1 : 2.00 3333.17 416.65 0.00 0.00 4796.97 783.58 13392.14 00:33:39.126 =================================================================================================================== 00:33:39.126 Total : 3333.17 416.65 0.00 0.00 4796.97 783.58 13392.14 00:33:39.126 0 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:39.126 | .driver_specific 00:33:39.126 | .nvme_error 00:33:39.126 | .status_code 00:33:39.126 | .command_transient_transport_error' 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 )) 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1831922 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1831922 ']' 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1831922 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:39.126 19:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1831922 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1831922' 00:33:39.385 killing process with pid 1831922 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1831922 00:33:39.385 Received shutdown signal, test time was about 2.000000 seconds 00:33:39.385 00:33:39.385 Latency(us) 00:33:39.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.385 
=================================================================================================================== 00:33:39.385 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1831922 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1832396 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1832396 /var/tmp/bperf.sock 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1832396 ']' 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:39.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:39.385 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.385 [2024-07-15 19:39:50.211664] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:39.385 [2024-07-15 19:39:50.211710] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832396 ] 00:33:39.385 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.385 [2024-07-15 19:39:50.238484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:39.644 [2024-07-15 19:39:50.263038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.644 [2024-07-15 19:39:50.304541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.644 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:39.644 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:39.644 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:39.644 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:39.903 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:39.903 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.903 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.903 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.903 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.903 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:40.162 nvme0n1 00:33:40.162 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:40.162 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.162 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:40.162 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.162 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:40.162 19:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:40.421 Running I/O for 2 seconds... 
00:33:40.421 [2024-07-15 19:39:51.077535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ee190 00:33:40.421 [2024-07-15 19:39:51.078408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.078437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.086215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fa7d8 00:33:40.421 [2024-07-15 19:39:51.087073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.087095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.097266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e1710 00:33:40.421 [2024-07-15 19:39:51.098596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.098621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.105790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f8618 00:33:40.421 [2024-07-15 19:39:51.106683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.106703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.114888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f96f8 00:33:40.421 [2024-07-15 19:39:51.115780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.115799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.124442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e95a0 00:33:40.421 [2024-07-15 19:39:51.125452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.125471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.134044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f8e88 00:33:40.421 [2024-07-15 19:39:51.135245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.135265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 
p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.142707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f2d80 00:33:40.421 [2024-07-15 19:39:51.143902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.143921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.150827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e3d08 00:33:40.421 [2024-07-15 19:39:51.151350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.151370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.160077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eb760 00:33:40.421 [2024-07-15 19:39:51.160838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.160857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.169638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ed920 00:33:40.421 [2024-07-15 19:39:51.170606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.170625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:40.421 [2024-07-15 19:39:51.178732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ebb98 00:33:40.421 [2024-07-15 19:39:51.179365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.421 [2024-07-15 19:39:51.179385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.422 [2024-07-15 19:39:51.188025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f20d8 00:33:40.422 [2024-07-15 19:39:51.189020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.422 [2024-07-15 19:39:51.189039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.422 [2024-07-15 19:39:51.197217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f57b0 00:33:40.422 [2024-07-15 19:39:51.198171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.422 [2024-07-15 19:39:51.198190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.422 [2024-07-15 19:39:51.206313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f7100 00:33:40.422 [2024-07-15 19:39:51.207265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.422 [2024-07-15 19:39:51.207284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.422 [2024-07-15 19:39:51.215411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f81e0 00:33:40.422 [2024-07-15 19:39:51.216385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.422 [2024-07-15 19:39:51.216404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.422 [2024-07-15 19:39:51.224693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f92c0 00:33:40.422 [2024-07-15 19:39:51.225648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.422 [2024-07-15 19:39:51.225667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.422 [2024-07-15 19:39:51.233796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fc560 00:33:40.422 [2024-07-15 19:39:51.234764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.422 [2024-07-15 19:39:51.234783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.422 [2024-07-15 19:39:51.242894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fe2e8 00:33:40.422 [2024-07-15 19:39:51.243870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.422 [2024-07-15 19:39:51.243889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.422 [2024-07-15 19:39:51.252043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fcdd0 00:33:40.422 [2024-07-15 19:39:51.253016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.422 [2024-07-15 19:39:51.253034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.422 [2024-07-15 19:39:51.261142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eaef0 00:33:40.422 [2024-07-15 19:39:51.262077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.422 [2024-07-15 19:39:51.262096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.422 [2024-07-15 19:39:51.270299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e4de8 00:33:40.422 [2024-07-15 19:39:51.271193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.422 [2024-07-15 19:39:51.271212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.279688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fb480 00:33:40.682 [2024-07-15 19:39:51.280627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.280646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.288773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eea00 00:33:40.682 [2024-07-15 19:39:51.289738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.289757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.297961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ec840 00:33:40.682 [2024-07-15 19:39:51.298929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.298948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.307061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e5a90 00:33:40.682 [2024-07-15 19:39:51.308005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.308023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.316153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0350 00:33:40.682 [2024-07-15 19:39:51.317100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.317118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.325301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ef270 00:33:40.682 [2024-07-15 19:39:51.326246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.326265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.334415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f2510 00:33:40.682 [2024-07-15 19:39:51.335386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.335408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.343511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f1430 00:33:40.682 [2024-07-15 19:39:51.344491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.344510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.352679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6458 00:33:40.682 [2024-07-15 19:39:51.353630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.353649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.361662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f7da8 00:33:40.682 [2024-07-15 19:39:51.362637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.362656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.370776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f8e88 00:33:40.682 [2024-07-15 19:39:51.371728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.371747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.380032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fc128 00:33:40.682 [2024-07-15 19:39:51.380986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.381005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.389109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190df988 00:33:40.682 [2024-07-15 19:39:51.390063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.390081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.398274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fda78 00:33:40.682 [2024-07-15 19:39:51.399227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.399245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.682 [2024-07-15 19:39:51.407406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ebb98 00:33:40.682 [2024-07-15 19:39:51.408371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.682 [2024-07-15 19:39:51.408389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.416495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e4140 00:33:40.683 [2024-07-15 19:39:51.417376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.417395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.425651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0bc0 00:33:40.683 [2024-07-15 19:39:51.426602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.426621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.434608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fb048 00:33:40.683 [2024-07-15 19:39:51.435557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.435576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.443682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ed4e8 00:33:40.683 [2024-07-15 19:39:51.444635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.444653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.452804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e6738 00:33:40.683 [2024-07-15 19:39:51.453780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 
19:39:51.453798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.461957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e5658 00:33:40.683 [2024-07-15 19:39:51.462943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.462962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.471064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eff18 00:33:40.683 [2024-07-15 19:39:51.472039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.472057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.480182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f31b8 00:33:40.683 [2024-07-15 19:39:51.481139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.481158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.489276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f20d8 00:33:40.683 [2024-07-15 19:39:51.490245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.490263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.498374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f57b0 00:33:40.683 [2024-07-15 19:39:51.499348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.499367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.507528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f7100 00:33:40.683 [2024-07-15 19:39:51.508502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.508520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.516629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f81e0 00:33:40.683 [2024-07-15 19:39:51.517608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:40.683 [2024-07-15 19:39:51.517627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.525757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f92c0 00:33:40.683 [2024-07-15 19:39:51.526711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.683 [2024-07-15 19:39:51.526729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.683 [2024-07-15 19:39:51.534975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fc560 00:33:40.944 [2024-07-15 19:39:51.535888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.535911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.544310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fe2e8 00:33:40.944 [2024-07-15 19:39:51.545240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.545259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.553434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fcdd0 00:33:40.944 [2024-07-15 19:39:51.554399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.554417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.562552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eaef0 00:33:40.944 [2024-07-15 19:39:51.563515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.563533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.571668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e4de8 00:33:40.944 [2024-07-15 19:39:51.572546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.572567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.580788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fb480 00:33:40.944 [2024-07-15 19:39:51.581757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4910 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.581777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.589880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eea00 00:33:40.944 [2024-07-15 19:39:51.590841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.590860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.599070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ec840 00:33:40.944 [2024-07-15 19:39:51.599944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.599962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.608176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e5a90 00:33:40.944 [2024-07-15 19:39:51.609074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.609092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.617309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0350 00:33:40.944 [2024-07-15 19:39:51.618170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.618188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.626572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ef270 00:33:40.944 [2024-07-15 19:39:51.627542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.627561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.635900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f2510 00:33:40.944 [2024-07-15 19:39:51.636839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.636858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.645057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f1430 00:33:40.944 [2024-07-15 19:39:51.646035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 
nsid:1 lba:12365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.646053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.654180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6458 00:33:40.944 [2024-07-15 19:39:51.655140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.655158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.663255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f7da8 00:33:40.944 [2024-07-15 19:39:51.664230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.664249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.672439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f8e88 00:33:40.944 [2024-07-15 19:39:51.673316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.673335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.681551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fc128 00:33:40.944 [2024-07-15 19:39:51.682506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.682525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.690635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190df988 00:33:40.944 [2024-07-15 19:39:51.691612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.691632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.699843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fda78 00:33:40.944 [2024-07-15 19:39:51.700846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.700865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.708978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ebb98 00:33:40.944 [2024-07-15 19:39:51.709848] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.709867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.944 [2024-07-15 19:39:51.718174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e4140 00:33:40.944 [2024-07-15 19:39:51.719145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.944 [2024-07-15 19:39:51.719163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.945 [2024-07-15 19:39:51.727558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0bc0 00:33:40.945 [2024-07-15 19:39:51.728573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.945 [2024-07-15 19:39:51.728593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.945 [2024-07-15 19:39:51.736790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fb048 00:33:40.945 [2024-07-15 19:39:51.737657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.945 [2024-07-15 19:39:51.737676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.945 [2024-07-15 19:39:51.745945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ed4e8 00:33:40.945 [2024-07-15 19:39:51.746956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.945 [2024-07-15 19:39:51.746974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.945 [2024-07-15 19:39:51.755077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e6738 00:33:40.945 [2024-07-15 19:39:51.756049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.945 [2024-07-15 19:39:51.756068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.945 [2024-07-15 19:39:51.764153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e5658 00:33:40.945 [2024-07-15 19:39:51.765020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.945 [2024-07-15 19:39:51.765039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.945 [2024-07-15 19:39:51.773317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eff18 00:33:40.945 [2024-07-15 19:39:51.774178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.945 [2024-07-15 19:39:51.774196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.945 [2024-07-15 19:39:51.782470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f31b8 00:33:40.945 [2024-07-15 19:39:51.783425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.945 [2024-07-15 19:39:51.783444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:40.945 [2024-07-15 19:39:51.791585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f20d8 00:33:40.945 [2024-07-15 19:39:51.792494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.945 [2024-07-15 19:39:51.792514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.800981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f57b0 00:33:41.205 [2024-07-15 19:39:51.801893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.205 [2024-07-15 19:39:51.801912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.810358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f7100 00:33:41.205 [2024-07-15 19:39:51.811319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.205 [2024-07-15 19:39:51.811346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.819528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f81e0 00:33:41.205 [2024-07-15 19:39:51.820483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.205 [2024-07-15 19:39:51.820502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.828687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f92c0 00:33:41.205 [2024-07-15 19:39:51.829657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.205 [2024-07-15 19:39:51.829676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.837785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fc560 00:33:41.205 [2024-07-15 
19:39:51.838746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.205 [2024-07-15 19:39:51.838765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.846945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fe2e8 00:33:41.205 [2024-07-15 19:39:51.847960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.205 [2024-07-15 19:39:51.847979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.856059] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fcdd0 00:33:41.205 [2024-07-15 19:39:51.857014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.205 [2024-07-15 19:39:51.857033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.865156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eaef0 00:33:41.205 [2024-07-15 19:39:51.866113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.205 [2024-07-15 19:39:51.866132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.874295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e4de8 00:33:41.205 [2024-07-15 19:39:51.875154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.205 [2024-07-15 19:39:51.875173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.883390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fb480 00:33:41.205 [2024-07-15 19:39:51.884256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.205 [2024-07-15 19:39:51.884276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.205 [2024-07-15 19:39:51.892657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eea00 00:33:41.205 [2024-07-15 19:39:51.893527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.893546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.901805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with 
pdu=0x2000190ec840 00:33:41.206 [2024-07-15 19:39:51.902768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.902787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.910944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e5a90 00:33:41.206 [2024-07-15 19:39:51.911939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.911959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.920071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0350 00:33:41.206 [2024-07-15 19:39:51.921070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.921088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.929219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ef270 00:33:41.206 [2024-07-15 19:39:51.930189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.930208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.938367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f2510 00:33:41.206 [2024-07-15 19:39:51.939332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.939350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.947523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f1430 00:33:41.206 [2024-07-15 19:39:51.948504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.948523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.956637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6458 00:33:41.206 [2024-07-15 19:39:51.957505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.957523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.965753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x16f3180) with pdu=0x2000190f7da8 00:33:41.206 [2024-07-15 19:39:51.966627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.966646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.974920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f8e88 00:33:41.206 [2024-07-15 19:39:51.975790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.975809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.983983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fc128 00:33:41.206 [2024-07-15 19:39:51.984965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.984985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:51.993105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190df988 00:33:41.206 [2024-07-15 19:39:51.994085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:51.994104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:52.002198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fda78 00:33:41.206 [2024-07-15 19:39:52.003171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:52.003190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:52.011271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ebb98 00:33:41.206 [2024-07-15 19:39:52.012131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:52.012150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:52.020429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e4140 00:33:41.206 [2024-07-15 19:39:52.021296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:52.021315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:52.029814] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ee5c8 00:33:41.206 [2024-07-15 19:39:52.030568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:52.030587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:52.038016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fe720 00:33:41.206 [2024-07-15 19:39:52.038971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:52.038990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:52.048119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e12d8 00:33:41.206 [2024-07-15 19:39:52.049202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:52.049228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:41.206 [2024-07-15 19:39:52.057509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eb328 00:33:41.206 [2024-07-15 19:39:52.058532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.206 [2024-07-15 19:39:52.058551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.466 [2024-07-15 19:39:52.066876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fd208 00:33:41.466 [2024-07-15 19:39:52.067965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.466 [2024-07-15 19:39:52.067984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.466 [2024-07-15 19:39:52.076008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fe720 00:33:41.466 [2024-07-15 19:39:52.077101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.466 [2024-07-15 19:39:52.077120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.466 [2024-07-15 19:39:52.085101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fc998 00:33:41.466 [2024-07-15 19:39:52.086097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.466 [2024-07-15 19:39:52.086117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.466 [2024-07-15 19:39:52.094223] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f96f8 00:33:41.466 [2024-07-15 19:39:52.095322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.466 [2024-07-15 19:39:52.095341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.466 [2024-07-15 19:39:52.103382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e27f0 00:33:41.466 [2024-07-15 19:39:52.104470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.466 [2024-07-15 19:39:52.104488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.466 [2024-07-15 19:39:52.112481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e1710 00:33:41.466 [2024-07-15 19:39:52.113562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.466 [2024-07-15 19:39:52.113581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.466 [2024-07-15 19:39:52.121650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e0630 00:33:41.466 [2024-07-15 19:39:52.122733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.466 [2024-07-15 19:39:52.122752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.466 [2024-07-15 19:39:52.130812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e6fa8 00:33:41.466 [2024-07-15 19:39:52.131944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.466 [2024-07-15 19:39:52.131966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.466 [2024-07-15 19:39:52.139951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6890 00:33:41.466 [2024-07-15 19:39:52.141030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.466 [2024-07-15 19:39:52.141049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.466 [2024-07-15 19:39:52.149267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eea00 00:33:41.466 [2024-07-15 19:39:52.150340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.150359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 
[2024-07-15 19:39:52.158342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ec840 00:33:41.467 [2024-07-15 19:39:52.159427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.159446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.167458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e5a90 00:33:41.467 [2024-07-15 19:39:52.168537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.168556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.176599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0350 00:33:41.467 [2024-07-15 19:39:52.177678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.177697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.185684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fac10 00:33:41.467 [2024-07-15 19:39:52.186775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.186793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.194838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0ff8 00:33:41.467 [2024-07-15 19:39:52.195932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.195951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.203938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e4578 00:33:41.467 [2024-07-15 19:39:52.205038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.205057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.213029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eb760 00:33:41.467 [2024-07-15 19:39:52.214124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.214143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 
m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.222182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fd640 00:33:41.467 [2024-07-15 19:39:52.223244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.223263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.231366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190df550 00:33:41.467 [2024-07-15 19:39:52.232462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.232493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.240485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fbcf0 00:33:41.467 [2024-07-15 19:39:52.241579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.241598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.249622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ebfd0 00:33:41.467 [2024-07-15 19:39:52.250744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.250763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.258708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e23b8 00:33:41.467 [2024-07-15 19:39:52.259810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.259828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.267859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e12d8 00:33:41.467 [2024-07-15 19:39:52.268857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.268876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.276994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e01f8 00:33:41.467 [2024-07-15 19:39:52.277985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.278004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 
cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.285482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f4b08 00:33:41.467 [2024-07-15 19:39:52.286536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.286555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.296375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190dece0 00:33:41.467 [2024-07-15 19:39:52.297920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.297939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.302799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e3d08 00:33:41.467 [2024-07-15 19:39:52.303479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.303497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:41.467 [2024-07-15 19:39:52.311423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f9f68 00:33:41.467 [2024-07-15 19:39:52.312097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.467 [2024-07-15 19:39:52.312115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:41.727 [2024-07-15 19:39:52.321264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f2510 00:33:41.727 [2024-07-15 19:39:52.322087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.727 [2024-07-15 19:39:52.322105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:41.727 [2024-07-15 19:39:52.332190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fa7d8 00:33:41.727 [2024-07-15 19:39:52.333470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.727 [2024-07-15 19:39:52.333488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:41.727 [2024-07-15 19:39:52.340286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fe2e8 00:33:41.727 [2024-07-15 19:39:52.340887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.727 [2024-07-15 19:39:52.340906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.349638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6cc8 00:33:41.728 [2024-07-15 19:39:52.350591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.350610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.358645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6020 00:33:41.728 [2024-07-15 19:39:52.359233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.359252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.368200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e95a0 00:33:41.728 [2024-07-15 19:39:52.368908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.368930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.377464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f81e0 00:33:41.728 [2024-07-15 19:39:52.378413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.378432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.386560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fda78 00:33:41.728 [2024-07-15 19:39:52.387508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.387527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.396914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ec408 00:33:41.728 [2024-07-15 19:39:52.398435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.398454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.403530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6890 00:33:41.728 [2024-07-15 19:39:52.404179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.404197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.412161] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f35f0 00:33:41.728 [2024-07-15 19:39:52.412817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.412835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.423062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6458 00:33:41.728 [2024-07-15 19:39:52.424191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.424209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.432555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6890 00:33:41.728 [2024-07-15 19:39:52.433814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.433832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.440664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e01f8 00:33:41.728 [2024-07-15 19:39:52.441254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.441273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.449958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190edd58 00:33:41.728 [2024-07-15 19:39:52.450852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.450871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.459180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f9f68 00:33:41.728 [2024-07-15 19:39:52.460010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.460030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.468388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eff18 00:33:41.728 [2024-07-15 19:39:52.469206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.469230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.477470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190de470 00:33:41.728 [2024-07-15 19:39:52.478297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.478316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.486805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f46d0 00:33:41.728 [2024-07-15 19:39:52.487654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.487674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.496241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190df988 00:33:41.728 [2024-07-15 19:39:52.497083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.497103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.505557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0bc0 00:33:41.728 [2024-07-15 19:39:52.506406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.506425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.514222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ef6a8 00:33:41.728 [2024-07-15 19:39:52.515124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.515142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.524349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6458 00:33:41.728 [2024-07-15 19:39:52.525419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.525437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.532938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190edd58 00:33:41.728 [2024-07-15 19:39:52.533963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 
19:39:52.533982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.543175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ec840 00:33:41.728 [2024-07-15 19:39:52.544242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.544261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.552314] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eee38 00:33:41.728 [2024-07-15 19:39:52.553385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.553404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.561411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f2d80 00:33:41.728 [2024-07-15 19:39:52.562479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.562498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.570498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f8618 00:33:41.728 [2024-07-15 19:39:52.571567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.571586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.728 [2024-07-15 19:39:52.579744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fd640 00:33:41.728 [2024-07-15 19:39:52.580845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.728 [2024-07-15 19:39:52.580864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.589028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f4298 00:33:41.989 [2024-07-15 19:39:52.590097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.590117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.598173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f57b0 00:33:41.989 [2024-07-15 19:39:52.599239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:41.989 [2024-07-15 19:39:52.599258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.607301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f20d8 00:33:41.989 [2024-07-15 19:39:52.608351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.608373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.616432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f9f68 00:33:41.989 [2024-07-15 19:39:52.617505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.617523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.625568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eff18 00:33:41.989 [2024-07-15 19:39:52.626635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.626654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.634659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190de470 00:33:41.989 [2024-07-15 19:39:52.635728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.635747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.643829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e6738 00:33:41.989 [2024-07-15 19:39:52.644896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.644915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.652971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e0ea0 00:33:41.989 [2024-07-15 19:39:52.654039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.654059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.662313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f92c0 00:33:41.989 [2024-07-15 19:39:52.663399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16498 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.663418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.671557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fb480 00:33:41.989 [2024-07-15 19:39:52.672626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.672646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.680649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0350 00:33:41.989 [2024-07-15 19:39:52.681719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.681738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.689772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ed4e8 00:33:41.989 [2024-07-15 19:39:52.690847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.690867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.698417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ecc78 00:33:41.989 [2024-07-15 19:39:52.699552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.699571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.706980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ebfd0 00:33:41.989 [2024-07-15 19:39:52.707675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.707694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.716435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f9b30 00:33:41.989 [2024-07-15 19:39:52.717007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.717025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.724785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fac10 00:33:41.989 [2024-07-15 19:39:52.725560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21463 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.725577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.734518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f4f40 00:33:41.989 [2024-07-15 19:39:52.735416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.735434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.744694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0ff8 00:33:41.989 [2024-07-15 19:39:52.745699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.745718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.753967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ec840 00:33:41.989 [2024-07-15 19:39:52.754938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.754957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.763146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f7538 00:33:41.989 [2024-07-15 19:39:52.764086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.764105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.771693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fb8b8 00:33:41.989 [2024-07-15 19:39:52.772693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.772711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.781686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e1b48 00:33:41.989 [2024-07-15 19:39:52.782814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.782832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.790244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e1710 00:33:41.989 [2024-07-15 19:39:52.791234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:2866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.791252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.800563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e27f0 00:33:41.989 [2024-07-15 19:39:52.801920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.801939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.808766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190df550 00:33:41.989 [2024-07-15 19:39:52.809460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.809480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.818127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eb328 00:33:41.989 [2024-07-15 19:39:52.819137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.819158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.826799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e3060 00:33:41.989 [2024-07-15 19:39:52.827803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.989 [2024-07-15 19:39:52.827822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:41.989 [2024-07-15 19:39:52.836356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fc560 00:33:41.989 [2024-07-15 19:39:52.837438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.990 [2024-07-15 19:39:52.837458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.846183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e6738 00:33:42.250 [2024-07-15 19:39:52.847413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.250 [2024-07-15 19:39:52.847435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.855807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190de8a8 00:33:42.250 [2024-07-15 19:39:52.857199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.250 [2024-07-15 19:39:52.857218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.864352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f5378 00:33:42.250 [2024-07-15 19:39:52.865272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.250 [2024-07-15 19:39:52.865291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.873416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e95a0 00:33:42.250 [2024-07-15 19:39:52.874333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.250 [2024-07-15 19:39:52.874351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.882755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f1430 00:33:42.250 [2024-07-15 19:39:52.883924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.250 [2024-07-15 19:39:52.883943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.891834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fdeb0 00:33:42.250 [2024-07-15 19:39:52.892621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.250 [2024-07-15 19:39:52.892640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.902284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f6458 00:33:42.250 [2024-07-15 19:39:52.903868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.250 [2024-07-15 19:39:52.903887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.908812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f4b08 00:33:42.250 [2024-07-15 19:39:52.909570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.250 [2024-07-15 19:39:52.909589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.918188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ea680 00:33:42.250 [2024-07-15 
19:39:52.918864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.250 [2024-07-15 19:39:52.918883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.927300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f0ff8 00:33:42.250 [2024-07-15 19:39:52.927976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.250 [2024-07-15 19:39:52.927995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.250 [2024-07-15 19:39:52.936488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190df550 00:33:42.250 [2024-07-15 19:39:52.937156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:52.937176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:52.945640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ec840 00:33:42.251 [2024-07-15 19:39:52.946313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:52.946332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:52.954720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fb048 00:33:42.251 [2024-07-15 19:39:52.955391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:52.955410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:52.963884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f2510 00:33:42.251 [2024-07-15 19:39:52.964561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:52.964580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:52.972993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e6738 00:33:42.251 [2024-07-15 19:39:52.973669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:52.973688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:52.982067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with 
pdu=0x2000190f4298 00:33:42.251 [2024-07-15 19:39:52.982740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:52.982759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:52.991220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ed0b0 00:33:42.251 [2024-07-15 19:39:52.991890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:52.991909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:53.000303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190ef6a8 00:33:42.251 [2024-07-15 19:39:53.000957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:53.000975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:53.009269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190fd208 00:33:42.251 [2024-07-15 19:39:53.009940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:53.009959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:53.018428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e9e10 00:33:42.251 [2024-07-15 19:39:53.019093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:53.019112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:53.027504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f7970 00:33:42.251 [2024-07-15 19:39:53.028171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:53.028189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:53.036938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190eb760 00:33:42.251 [2024-07-15 19:39:53.037789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:53.037808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:53.046201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16f3180) with pdu=0x2000190eaef0 00:33:42.251 [2024-07-15 19:39:53.047001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:53.047020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:53.055297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190f7100 00:33:42.251 [2024-07-15 19:39:53.056087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:53.056106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:42.251 [2024-07-15 19:39:53.064485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16f3180) with pdu=0x2000190e4140 00:33:42.251 [2024-07-15 19:39:53.065280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.251 [2024-07-15 19:39:53.065298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:42.251 00:33:42.251 Latency(us) 00:33:42.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.251 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:42.251 nvme0n1 : 2.00 27807.96 108.62 0.00 0.00 4597.00 2265.27 13563.10 00:33:42.251 =================================================================================================================== 00:33:42.251 Total : 27807.96 108.62 0.00 0.00 4597.00 2265.27 13563.10 00:33:42.251 0 00:33:42.251 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:42.251 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:42.251 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:42.251 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:42.251 | .driver_specific 00:33:42.251 | .nvme_error 00:33:42.251 | .status_code 00:33:42.251 | .command_transient_transport_error' 00:33:42.518 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 )) 00:33:42.518 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1832396 00:33:42.518 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1832396 ']' 00:33:42.518 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1832396 00:33:42.518 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:42.518 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:42.519 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1832396 00:33:42.519 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:42.519 
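
The xtrace above shows host/digest.sh closing out the first (qd=128) pass: it pulls the bdev I/O statistics over the bdevperf RPC socket, extracts the NVMe "command transient transport error" counter with jq, and checks that the injected digest corruption actually surfaced as transport errors ((( 218 > 0 )) here) before killing the bdevperf process. A minimal sketch of that query follows; it is not part of the captured log, and it assumes the SPDK checkout as working directory, the bdevperf RPC server still listening on /var/tmp/bperf.sock, and jq on PATH.
  # Sketch (not captured output): how get_transient_errcount derives the value asserted above
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
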
19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:42.519 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1832396'
00:33:42.519 killing process with pid 1832396
00:33:42.519 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1832396
00:33:42.519 Received shutdown signal, test time was about 2.000000 seconds
00:33:42.519
00:33:42.519 Latency(us)
00:33:42.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:42.519 ===================================================================================================================
00:33:42.519 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:42.519 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1832396
00:33:42.784 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:42.784 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:42.784 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:42.784 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:42.784 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:42.784 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1833041
00:33:42.784 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1833041 /var/tmp/bperf.sock
00:33:42.785 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:42.785 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1833041 ']'
00:33:42.785 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:42.785 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:42.785 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:42.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:42.785 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:42.785 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:42.785 [2024-07-15 19:39:53.530863] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization...
00:33:42.785 [2024-07-15 19:39:53.530913] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833041 ]
00:33:42.785 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:42.785 Zero copy mechanism will not be used.
00:33:42.785 EAL: No free 2048 kB hugepages reported on node 1
00:33:42.785 [2024-07-15 19:39:53.557139] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK.
Enabled only for validation.
00:33:42.785 [2024-07-15 19:39:53.585493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:42.785 [2024-07-15 19:39:53.626392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:43.044 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:43.044 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:43.044 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:43.044 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:43.303 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:43.303 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:43.303 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.303 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:43.303 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:43.303 19:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:43.563 nvme0n1
00:33:43.563 19:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:43.563 19:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:43.563 19:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.563 19:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:43.563 19:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:43.563 19:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:43.563 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:43.563 Zero copy mechanism will not be used.
00:33:43.563 Running I/O for 2 seconds...
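
The trace above sets up the second error-injection pass (randwrite, 128 KiB I/Os, queue depth 16): a fresh bdevperf is started on /var/tmp/bperf.sock, NVMe error statistics are enabled, a controller is attached over TCP with data digest (--ddgst), crc32c error injection is armed through rpc_cmd, and perform_tests kicks off the 2-second run that produces the digest errors below. A condensed sketch of those RPC steps follows; it is not captured output, and the socket used by the accel_error_inject_error calls is an assumption (the trace routes them through rpc_cmd with no -s override, which in this harness should be the target application's default RPC socket rather than the bperf socket).
  # Sketch (not captured output): RPC sequence behind the xtrace above, run from the SPDK checkout
  BPERF="./scripts/rpc.py -s /var/tmp/bperf.sock"                       # bdevperf (initiator) RPC socket
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # default socket; assumed target side
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                    # data digest enabled on the attach
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # arm crc32c corruption for the run
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
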
00:33:43.563 [2024-07-15 19:39:54.304999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.305365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.305392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.310415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.310769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.310792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.316384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.316767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.316788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.321816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.322169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.322189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.327873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.328213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.328238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.335430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.335783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.335805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.341944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.342026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.342047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.348066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.348412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.348432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.354255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.354612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.354632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.360391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.360743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.360763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.366339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.366723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.366747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.371782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.372126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.372147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.563 [2024-07-15 19:39:54.378661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.563 [2024-07-15 19:39:54.379040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.563 [2024-07-15 19:39:54.379060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.564 [2024-07-15 19:39:54.386267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.564 [2024-07-15 19:39:54.386610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.564 [2024-07-15 19:39:54.386631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.564 [2024-07-15 19:39:54.393492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.564 [2024-07-15 19:39:54.393884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.564 [2024-07-15 19:39:54.393904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.564 [2024-07-15 19:39:54.401426] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.564 [2024-07-15 19:39:54.401789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.564 [2024-07-15 19:39:54.401808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.564 [2024-07-15 19:39:54.408105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.564 [2024-07-15 19:39:54.408480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.564 [2024-07-15 19:39:54.408500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.564 [2024-07-15 19:39:54.415066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.564 [2024-07-15 19:39:54.415442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.564 [2024-07-15 19:39:54.415462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.823 [2024-07-15 19:39:54.422527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.823 [2024-07-15 19:39:54.422894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.823 [2024-07-15 19:39:54.422914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.823 [2024-07-15 19:39:54.429920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.430295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.430315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.437953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.438324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.438345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.445465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.445836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.445856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.451478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.451827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.451846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.456313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.456686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.456706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.461823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.462234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.462254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.467125] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.467514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.467535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.473698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.474082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.474101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.481435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.481791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 
[2024-07-15 19:39:54.481811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.489308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.489716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.489736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.495196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.495551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.495571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.499988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.500335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.500355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.504726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.505096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.505116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.510745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.511097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.511116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.515858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.516242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.516263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.521906] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.522279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.522300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.527932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.527993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.528010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.534325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.534732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.534755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.540706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.541057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.541078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.546512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.546630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.546650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.554020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.554424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.554443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.559584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.559922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.559943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.564657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.565021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.565058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.569523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.569871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.569891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.574204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.574562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.574582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.579061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.579403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.579423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.583803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.584162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.584183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.588648] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.589005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.589026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.593410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.824 [2024-07-15 19:39:54.593763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.824 [2024-07-15 19:39:54.593784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.824 [2024-07-15 19:39:54.598097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.598451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.598471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.602772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.603121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.603142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.607401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.607763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.607783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.612068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.612447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.612467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.616975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.617336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.617357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.621631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.621991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.622011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.626351] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.626733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.626753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.631301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 
[2024-07-15 19:39:54.631673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.631693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.636537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.636890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.636912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.641213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.641573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.641594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.646043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.646432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.646452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.650827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.651158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.651178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.655594] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.655948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.655968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.660373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.660721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.660741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.665012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.665376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.665399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.669726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.670076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.670096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.825 [2024-07-15 19:39:54.674453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:43.825 [2024-07-15 19:39:54.674806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.825 [2024-07-15 19:39:54.674826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.086 [2024-07-15 19:39:54.679248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.086 [2024-07-15 19:39:54.679630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.086 [2024-07-15 19:39:54.679650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.086 [2024-07-15 19:39:54.684021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.086 [2024-07-15 19:39:54.684381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.086 [2024-07-15 19:39:54.684402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.086 [2024-07-15 19:39:54.688704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.086 [2024-07-15 19:39:54.689058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.086 [2024-07-15 19:39:54.689078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.086 [2024-07-15 19:39:54.693342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.086 [2024-07-15 19:39:54.693692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.086 [2024-07-15 19:39:54.693712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.086 [2024-07-15 19:39:54.698082] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.086 [2024-07-15 19:39:54.698437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.086 [2024-07-15 19:39:54.698457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.702827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.703174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.703193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.707716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.708054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.708074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.712448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.712803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.712823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.717165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.717520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.717540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.721996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.722340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.722361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.727694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.728065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.728085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:44.087 [2024-07-15 19:39:54.732659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.733007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.733028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.737699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.738079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.738100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.742734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.743102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.743122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.747984] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.748339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.748364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.752827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.753189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.753210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.757632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.757995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.758015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.762455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.762814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.762833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.767223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.767582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.767603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.771986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.772359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.772380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.776823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.777182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.777202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.781651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.782001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.782021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.786405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.786772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.786793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.791246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.791615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.791636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.796014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.796372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.796393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.800775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.801136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.801157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.805802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.806159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.806180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.810575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.810938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.810959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.815356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.815719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.815739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.820591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.820945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.820966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.826082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.826450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.826470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.831358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.831723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.831743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.836184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.836538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.836559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.087 [2024-07-15 19:39:54.841042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.087 [2024-07-15 19:39:54.841407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.087 [2024-07-15 19:39:54.841427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.845894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.846248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.846269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.850792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.851150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.851170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.855636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.855997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.856017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.860441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.860821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.860842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.865394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.865748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 
[2024-07-15 19:39:54.865768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.870606] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.870962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.870983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.876880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.877240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.877281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.883989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.884343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.884363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.890923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.891296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.891317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.899164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.899537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.899557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.907344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.907736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.907756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.915214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.915590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.915610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.923285] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.923665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.923685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.088 [2024-07-15 19:39:54.932251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.088 [2024-07-15 19:39:54.932629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.088 [2024-07-15 19:39:54.932649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:54.940904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:54.941283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:54.941305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:54.949749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:54.950115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:54.950136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:54.957895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:54.958282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:54.958302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:54.966573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:54.966950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:54.966969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:54.974795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:54.975182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:54.975201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:54.982974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:54.983375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:54.983396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:54.990919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:54.991008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:54.991029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:54.997488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:54.997858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:54.997878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:55.003730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:55.004090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:55.004111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:55.009021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:55.009407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:55.009427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:55.014057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.349 [2024-07-15 19:39:55.014421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.349 [2024-07-15 19:39:55.014441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.349 [2024-07-15 19:39:55.023747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.024137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.024156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.032151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.032544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.032563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.040149] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.040536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.040556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.047399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.047793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.047813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.054519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.054881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.054901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.061660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.062037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.062058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.068167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.068546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.068567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.074975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 
[2024-07-15 19:39:55.075361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.075385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.080463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.080850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.080869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.086591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.086998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.087018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.092582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.092947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.092966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.097912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.098275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.098296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.103878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.104253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.104273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.110429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.110787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.110806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.118001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) 
with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.118405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.118424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.124297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.124662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.124681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.130433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.130807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.130826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.136813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.137232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.137253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.142677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.142900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.142921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.148434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.148839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.148860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.154681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.155058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.155077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.161726] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.162125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.162145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.168286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.168662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.168681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.175177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.175551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.175571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.182448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.182835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.182861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.190800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.191214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.191238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.350 [2024-07-15 19:39:55.201076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.350 [2024-07-15 19:39:55.201484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.350 [2024-07-15 19:39:55.201503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.611 [2024-07-15 19:39:55.209299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.611 [2024-07-15 19:39:55.209672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.611 [2024-07-15 19:39:55.209692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.611 [2024-07-15 19:39:55.216789] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.611 [2024-07-15 19:39:55.217159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.611 [2024-07-15 19:39:55.217179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.611 [2024-07-15 19:39:55.222976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.611 [2024-07-15 19:39:55.223365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.611 [2024-07-15 19:39:55.223386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.611 [2024-07-15 19:39:55.231665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.611 [2024-07-15 19:39:55.232051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.611 [2024-07-15 19:39:55.232071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.611 [2024-07-15 19:39:55.242113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.242516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.242537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.251311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.251461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.251481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.260049] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.260442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.260462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.266607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.266970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.266990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:33:44.612 [2024-07-15 19:39:55.275130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.275534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.275553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.282373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.282723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.282742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.289234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.289647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.289667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.296927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.297332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.297352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.304560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.304947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.304966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.312688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.313079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.313097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.322332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.322705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.322725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.330154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.330539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.330558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.337664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.338025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.338045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.346572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.346924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.346944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.356034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.356439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.356459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.365442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.365823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.365842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.375746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.375942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.375961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.385443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.385829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.385849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.395374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.395494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.395511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.405074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.405438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.405463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.412453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.412789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.412808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.420162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.420616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.420635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.428522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.428957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.428977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.435999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.436432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.436457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.445143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.445647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.445667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.454533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.454930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.454950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.612 [2024-07-15 19:39:55.463595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.612 [2024-07-15 19:39:55.463937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.612 [2024-07-15 19:39:55.463957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.469028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.469370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.469391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.474246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.474608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.474628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.479534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.479858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.479878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.484254] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.484590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.484611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.488898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.489232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 
[2024-07-15 19:39:55.489254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.495434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.495933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.495953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.504354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.504752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.504773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.511750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.512166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.512187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.518128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.518477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.518497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.523669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.524165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.524184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.529015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.529351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.529370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.535310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.535650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.535670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.540288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.540625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.540645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.544939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.545275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.545294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.549711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.550041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.550061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.554385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.554705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.554725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.558938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.559289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.559310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.565256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.565823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.565843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.574699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.575145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.575168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.580989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.581448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.581468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.587678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.588019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.588040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.593928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.594258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.594278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.873 [2024-07-15 19:39:55.599651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.873 [2024-07-15 19:39:55.599983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.873 [2024-07-15 19:39:55.600002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.604371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.604694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.604713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.609376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.609744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.609764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.614171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.614551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.614571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.619135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.619460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.619479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.623700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.624018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.624037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.628639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.628953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.628973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.633430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.633774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.633793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.638687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.638985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.639005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.643523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.643832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.643852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.648705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 
[2024-07-15 19:39:55.649012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.649031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.653731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.654046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.654065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.658819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.659131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.659151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.664159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.664500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.664520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.671301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.671725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.671745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.677407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.677717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.677736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.683616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.683926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.683947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.688753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.689061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.689080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.693777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.694094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.694113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.698655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.698965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.698985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.704335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.704653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.704672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.709319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.709628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.709648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.714409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.714725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.714744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.719811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:44.874 [2024-07-15 19:39:55.720121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.874 [2024-07-15 19:39:55.720141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.874 [2024-07-15 19:39:55.725758] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.726067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.726089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.731813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.732114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.732133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.737296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.737624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.737643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.743267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.743580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.743600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.750368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.750673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.750692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.756197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.756524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.756543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.762075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.762407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.762427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:33:45.134 [2024-07-15 19:39:55.769445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.769761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.769781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.776147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.776610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.776630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.786316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.786661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.786681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.793758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.794115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.794135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.800720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.801108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.801127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.808973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.809454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.809474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.817515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.817899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.817917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.825819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.826241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.826261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.834397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.834769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.834793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.842569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.843003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.134 [2024-07-15 19:39:55.843023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.134 [2024-07-15 19:39:55.850625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.134 [2024-07-15 19:39:55.851027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.851047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.859172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.859593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.859613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.867572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.867930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.867950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.876250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.876680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.876700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.884926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.885369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.885390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.892927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.893276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.893297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.901697] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.902054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.902074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.909508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.909906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.909926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.917604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.917936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.917956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.925286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.925674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.925695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.932981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.933342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.933361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.941368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.941743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.941763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.949764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.950154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.950174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.958383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.958669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.958689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.966027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.966359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.966379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.973837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.974220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.974246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.135 [2024-07-15 19:39:55.982103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.135 [2024-07-15 19:39:55.982454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.135 [2024-07-15 19:39:55.982474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:55.990452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:55.990888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 
[2024-07-15 19:39:55.990908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:55.998667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:55.999062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:55.999081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.006279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.006644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.006664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.014262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.014595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.014614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.020680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.020962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.020981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.027612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.027927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.027945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.034722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.034997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.035016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.042458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.042798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.042822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.050753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.051125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.051144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.058213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.058497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.058517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.065912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.066336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.066356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.074689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.075087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.075106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.082884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.083294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.083315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.091327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.091694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.091714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.099782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.100157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.395 [2024-07-15 19:39:56.100177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.395 [2024-07-15 19:39:56.107881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.395 [2024-07-15 19:39:56.108157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.108177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.114722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.115022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.115042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.121663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.121971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.121991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.129101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.129463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.129483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.135061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.135398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.135418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.141754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.142131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.142151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.149232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.149608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.149627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.156671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.157091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.157111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.164309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.164686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.164706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.172198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.172589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.172608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.179652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.179881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.179900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.186875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.187192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.187213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.194112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.194517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.194538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.201540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 
[2024-07-15 19:39:56.201871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.201890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.209464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.209771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.209791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.217506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.217862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.217882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.225431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.225821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.225841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.233581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.233963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.233983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.396 [2024-07-15 19:39:56.241897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.396 [2024-07-15 19:39:56.242270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.396 [2024-07-15 19:39:56.242294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.678 [2024-07-15 19:39:56.249820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.678 [2024-07-15 19:39:56.250215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.678 [2024-07-15 19:39:56.250241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.678 [2024-07-15 19:39:56.258197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.678 [2024-07-15 19:39:56.258605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.678 [2024-07-15 19:39:56.258624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.678 [2024-07-15 19:39:56.265282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.678 [2024-07-15 19:39:56.265591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.678 [2024-07-15 19:39:56.265611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.678 [2024-07-15 19:39:56.271929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.678 [2024-07-15 19:39:56.272243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.678 [2024-07-15 19:39:56.272263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.678 [2024-07-15 19:39:56.278463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.678 [2024-07-15 19:39:56.278730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.678 [2024-07-15 19:39:56.278750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.678 [2024-07-15 19:39:56.284899] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.678 [2024-07-15 19:39:56.285237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.678 [2024-07-15 19:39:56.285258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.678 [2024-07-15 19:39:56.292600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.678 [2024-07-15 19:39:56.292986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.678 [2024-07-15 19:39:56.293005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.678 [2024-07-15 19:39:56.299407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15281f0) with pdu=0x2000190fef90 00:33:45.678 [2024-07-15 19:39:56.299706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.678 [2024-07-15 19:39:56.299725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.678 00:33:45.678 Latency(us) 00:33:45.678 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.678 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:45.678 nvme0n1 : 2.00 4708.99 588.62 0.00 0.00 3391.92 2179.78 11112.63 00:33:45.678 =================================================================================================================== 00:33:45.678 Total : 4708.99 588.62 0.00 0.00 3391.92 2179.78 11112.63 00:33:45.678 0 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:45.678 | .driver_specific 00:33:45.678 | .nvme_error 00:33:45.678 | .status_code 00:33:45.678 | .command_transient_transport_error' 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 304 > 0 )) 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1833041 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1833041 ']' 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1833041 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:45.678 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1833041 00:33:45.937 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:45.937 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:45.937 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1833041' 00:33:45.937 killing process with pid 1833041 00:33:45.937 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1833041 00:33:45.937 Received shutdown signal, test time was about 2.000000 seconds 00:33:45.937 00:33:45.938 Latency(us) 00:33:45.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.938 =================================================================================================================== 00:33:45.938 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1833041 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1831305 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1831305 ']' 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1831305 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:45.938 
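(Note: get_transient_errcount above reads that error counter back through the bperf RPC socket; reassembled from the wrapped trace, the pipeline is roughly the following sketch. Every path, socket, bdev name, and jq filter is taken from the trace itself:

    # Query per-bdev I/O statistics over the bperf RPC socket and pull out the
    # transient-transport-error counter that the test compares against 0.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In this run it returned 304, so the (( 304 > 0 )) check passes.)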
19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1831305 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1831305' 00:33:45.938 killing process with pid 1831305 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1831305 00:33:45.938 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1831305 00:33:46.197 00:33:46.197 real 0m13.779s 00:33:46.197 user 0m26.211s 00:33:46.197 sys 0m4.143s 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.197 ************************************ 00:33:46.197 END TEST nvmf_digest_error 00:33:46.197 ************************************ 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:46.197 19:39:56 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:46.197 rmmod nvme_tcp 00:33:46.197 rmmod nvme_fabrics 00:33:46.197 rmmod nvme_keyring 00:33:46.197 19:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:46.197 19:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:46.197 19:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:46.197 19:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1831305 ']' 00:33:46.197 19:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1831305 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1831305 ']' 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1831305 00:33:46.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1831305) - No such process 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1831305 is not found' 00:33:46.198 Process with pid 1831305 is not found 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:46.198 19:39:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.772 19:39:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:48.772 00:33:48.772 real 0m34.979s 00:33:48.772 user 0m53.414s 00:33:48.772 sys 0m12.484s 00:33:48.772 19:39:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:48.772 19:39:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:48.772 ************************************ 00:33:48.772 END TEST nvmf_digest 00:33:48.772 ************************************ 00:33:48.772 19:39:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:48.772 19:39:59 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:48.772 19:39:59 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:48.772 19:39:59 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:48.772 19:39:59 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:48.772 19:39:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:48.772 19:39:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:48.772 19:39:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:48.772 ************************************ 00:33:48.772 START TEST nvmf_bdevperf 00:33:48.772 ************************************ 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:48.772 * Looking for test storage... 
00:33:48.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.772 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:48.773 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:48.773 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:48.773 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.773 19:39:59 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:48.773 19:39:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.773 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:48.773 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:48.773 19:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:48.773 19:39:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:54.049 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:54.049 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:54.049 Found net devices under 0000:86:00.0: cvl_0_0 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:54.049 Found net devices under 0000:86:00.1: cvl_0_1 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:54.049 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:54.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:33:54.050 00:33:54.050 --- 10.0.0.2 ping statistics --- 00:33:54.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.050 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:54.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:33:54.050 00:33:54.050 --- 10.0.0.1 ping statistics --- 00:33:54.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.050 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1836868 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1836868 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1836868 ']' 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:54.050 19:40:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.050 [2024-07-15 19:40:04.864064] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:54.050 [2024-07-15 19:40:04.864108] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.050 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.050 [2024-07-15 19:40:04.895928] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
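The sequence above is how nvmf_tcp_init builds the test network without loopback: the two E810 ports found in the PCI scan are paired, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction confirms reachability. A minimal standalone sketch of the same wiring, with hypothetical interface names eth_a/eth_b standing in for the ports the real script discovers:
    # Sketch only: eth_a/eth_b are placeholders for the two discovered NIC ports.
    ip netns add tgt_ns                                   # namespace that will host the NVMe-oF target
    ip link set eth_a netns tgt_ns                        # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev eth_b                     # initiator keeps its port in the root namespace
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth_a
    ip link set eth_b up
    ip netns exec tgt_ns ip link set eth_a up
    ip netns exec tgt_ns ip link set lo up
    iptables -I INPUT 1 -i eth_b -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                    # initiator -> target reachability
    ip netns exec tgt_ns ping -c 1 10.0.0.1               # target -> initiator reachability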
00:33:54.310 [2024-07-15 19:40:04.924526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:54.310 [2024-07-15 19:40:04.966859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.310 [2024-07-15 19:40:04.966900] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.310 [2024-07-15 19:40:04.966908] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.310 [2024-07-15 19:40:04.966914] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.310 [2024-07-15 19:40:04.966919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.310 [2024-07-15 19:40:04.966963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.310 [2024-07-15 19:40:04.967034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:54.310 [2024-07-15 19:40:04.967035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.310 [2024-07-15 19:40:05.104966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.310 Malloc0 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.310 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.570 19:40:05 
nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.570 [2024-07-15 19:40:05.175882] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:54.570 { 00:33:54.570 "params": { 00:33:54.570 "name": "Nvme$subsystem", 00:33:54.570 "trtype": "$TEST_TRANSPORT", 00:33:54.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.570 "adrfam": "ipv4", 00:33:54.570 "trsvcid": "$NVMF_PORT", 00:33:54.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.570 "hdgst": ${hdgst:-false}, 00:33:54.570 "ddgst": ${ddgst:-false} 00:33:54.570 }, 00:33:54.570 "method": "bdev_nvme_attach_controller" 00:33:54.570 } 00:33:54.570 EOF 00:33:54.570 )") 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:54.570 19:40:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:54.570 "params": { 00:33:54.570 "name": "Nvme1", 00:33:54.570 "trtype": "tcp", 00:33:54.570 "traddr": "10.0.0.2", 00:33:54.570 "adrfam": "ipv4", 00:33:54.570 "trsvcid": "4420", 00:33:54.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.570 "hdgst": false, 00:33:54.570 "ddgst": false 00:33:54.570 }, 00:33:54.570 "method": "bdev_nvme_attach_controller" 00:33:54.570 }' 00:33:54.570 [2024-07-15 19:40:05.224962] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:54.570 [2024-07-15 19:40:05.225004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837081 ] 00:33:54.570 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.570 [2024-07-15 19:40:05.252850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:54.570 [2024-07-15 19:40:05.277935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.571 [2024-07-15 19:40:05.319913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.830 Running I/O for 1 seconds... 
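The target is configured entirely over the JSON-RPC socket; rpc_cmd in the log is the harness wrapper around SPDK's scripts/rpc.py. The calls visible above create a TCP transport with 8192-byte in-capsule data, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420; bdevperf is then launched with a JSON config (the bdev_nvme_attach_controller parameters printed above) passed on fd 62. A hedged sketch of issuing the same sequence directly with rpc.py against an already running target (the RPC socket is a path-based UNIX socket, so this works from outside the namespace):
    # Rough equivalent of the rpc_cmd sequence above (sketch, not the exact harness code).
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420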
00:33:55.768 00:33:55.768 Latency(us) 00:33:55.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.768 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:55.768 Verification LBA range: start 0x0 length 0x4000 00:33:55.768 Nvme1n1 : 1.00 10937.35 42.72 0.00 0.00 11660.43 2165.54 14189.97 00:33:55.768 =================================================================================================================== 00:33:55.768 Total : 10937.35 42.72 0.00 0.00 11660.43 2165.54 14189.97 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1837326 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:56.028 { 00:33:56.028 "params": { 00:33:56.028 "name": "Nvme$subsystem", 00:33:56.028 "trtype": "$TEST_TRANSPORT", 00:33:56.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.028 "adrfam": "ipv4", 00:33:56.028 "trsvcid": "$NVMF_PORT", 00:33:56.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.028 "hdgst": ${hdgst:-false}, 00:33:56.028 "ddgst": ${ddgst:-false} 00:33:56.028 }, 00:33:56.028 "method": "bdev_nvme_attach_controller" 00:33:56.028 } 00:33:56.028 EOF 00:33:56.028 )") 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:56.028 19:40:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:56.028 "params": { 00:33:56.028 "name": "Nvme1", 00:33:56.028 "trtype": "tcp", 00:33:56.028 "traddr": "10.0.0.2", 00:33:56.028 "adrfam": "ipv4", 00:33:56.028 "trsvcid": "4420", 00:33:56.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:56.028 "hdgst": false, 00:33:56.028 "ddgst": false 00:33:56.028 }, 00:33:56.028 "method": "bdev_nvme_attach_controller" 00:33:56.028 }' 00:33:56.029 [2024-07-15 19:40:06.705739] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:33:56.029 [2024-07-15 19:40:06.705789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837326 ] 00:33:56.029 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.029 [2024-07-15 19:40:06.732674] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
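The one-second run's numbers are internally consistent: with 4096-byte I/O, MiB/s is simply IOPS x 4096 / 2^20, so the reported 10937.35 IOPS reproduces the 42.72 MiB/s column. A quick check:
    # 4 KiB I/O: throughput in MiB/s = IOPS * 4096 / 1048576
    awk 'BEGIN { printf "%.2f MiB/s\n", 10937.35 * 4096 / 1048576 }'   # prints 42.72 MiB/s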
00:33:56.029 [2024-07-15 19:40:06.761434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.029 [2024-07-15 19:40:06.800741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.288 Running I/O for 15 seconds... 00:33:58.822 19:40:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1836868 00:33:58.822 19:40:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:59.082 [2024-07-15 19:40:09.681007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.083 [2024-07-15 19:40:09.681317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:59.083 [2024-07-15 19:40:09.681399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681591] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681806] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.083 [2024-07-15 19:40:09.681856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.083 [2024-07-15 19:40:09.681863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.681871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.681878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.681886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.681892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.681900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.681906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.681915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.681921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.681929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.681936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.681944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.681950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.681958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95328 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.681965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.681974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.681980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.681990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.681996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 
[2024-07-15 19:40:09.682117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.084 [2024-07-15 19:40:09.682303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.084 [2024-07-15 19:40:09.682502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.084 [2024-07-15 19:40:09.682509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.085 [2024-07-15 19:40:09.682561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.085 [2024-07-15 19:40:09.682576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.085 [2024-07-15 19:40:09.682591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.085 [2024-07-15 19:40:09.682607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.085 [2024-07-15 19:40:09.682621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.085 [2024-07-15 19:40:09.682636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.085 [2024-07-15 19:40:09.682652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.085 [2024-07-15 19:40:09.682667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 
[2024-07-15 19:40:09.682739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.682985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.682993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.683007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.683022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.683038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:121 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.683052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.683068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.683084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.683098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.683112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.683128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.085 [2024-07-15 19:40:09.683142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.085 [2024-07-15 19:40:09.683149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26b2590 is same with the state(5) to be set 00:33:59.085 [2024-07-15 19:40:09.683158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:59.085 [2024-07-15 19:40:09.683163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:59.086 [2024-07-15 19:40:09.683169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95048 len:8 PRP1 0x0 PRP2 0x0 00:33:59.086 [2024-07-15 19:40:09.683177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.086 [2024-07-15 19:40:09.683220] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26b2590 was disconnected and freed. reset controller. 
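The flood of completions above is the direct consequence of the kill -9 of the target a few lines earlier: every READ and WRITE still queued on the I/O qpair is completed in software with "ABORTED - SQ DELETION" (SCT 0, SC 0x08), dnr:0 marks them as retriable rather than failed, and the qpair is then disconnected and freed so bdev_nvme can attempt a controller reset. To tally them from a saved copy of this log (the file name build.log is an assumption), something like:
    # Count queued commands that were aborted and the abort completions themselves.
    grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: READ sqid:1'  build.log   # aborted reads
    grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE sqid:1' build.log   # aborted writes
    grep -c 'ABORTED - SQ DELETION'                                 build.log   # total aborted completions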
00:33:59.086 [2024-07-15 19:40:09.686066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.686117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.686761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.686778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.686785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.686962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.687140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.086 [2024-07-15 19:40:09.687148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.086 [2024-07-15 19:40:09.687155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.086 [2024-07-15 19:40:09.689988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.086 [2024-07-15 19:40:09.699375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.699855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.699900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.699922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.700523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.700921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.086 [2024-07-15 19:40:09.700931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.086 [2024-07-15 19:40:09.700937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.086 [2024-07-15 19:40:09.703530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.086 [2024-07-15 19:40:09.712194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.712670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.712688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.712695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.712858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.713022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.086 [2024-07-15 19:40:09.713031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.086 [2024-07-15 19:40:09.713037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.086 [2024-07-15 19:40:09.715634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.086 [2024-07-15 19:40:09.725123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.725571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.725588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.725596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.725759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.725922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.086 [2024-07-15 19:40:09.725931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.086 [2024-07-15 19:40:09.725937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.086 [2024-07-15 19:40:09.728538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.086 [2024-07-15 19:40:09.738114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.738509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.738553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.738575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.738996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.739160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.086 [2024-07-15 19:40:09.739170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.086 [2024-07-15 19:40:09.739176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.086 [2024-07-15 19:40:09.741769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.086 [2024-07-15 19:40:09.750944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.751299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.751315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.751322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.751484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.751647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.086 [2024-07-15 19:40:09.751656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.086 [2024-07-15 19:40:09.751663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.086 [2024-07-15 19:40:09.754257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.086 [2024-07-15 19:40:09.763779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.764233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.764250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.764257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.764421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.764588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.086 [2024-07-15 19:40:09.764597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.086 [2024-07-15 19:40:09.764603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.086 [2024-07-15 19:40:09.767199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.086 [2024-07-15 19:40:09.776685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.777128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.777172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.777194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.777789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.778173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.086 [2024-07-15 19:40:09.778182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.086 [2024-07-15 19:40:09.778189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.086 [2024-07-15 19:40:09.780869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.086 [2024-07-15 19:40:09.789593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.790055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.790099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.790122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.790716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.791147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.086 [2024-07-15 19:40:09.791156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.086 [2024-07-15 19:40:09.791162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.086 [2024-07-15 19:40:09.793754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.086 [2024-07-15 19:40:09.802472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.802881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.802923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.802946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.803997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.086 [2024-07-15 19:40:09.804269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.086 [2024-07-15 19:40:09.804280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.086 [2024-07-15 19:40:09.804287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.086 [2024-07-15 19:40:09.806881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.086 [2024-07-15 19:40:09.815390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.086 [2024-07-15 19:40:09.815867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.086 [2024-07-15 19:40:09.815912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.086 [2024-07-15 19:40:09.815936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.086 [2024-07-15 19:40:09.816532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.087 [2024-07-15 19:40:09.817094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.087 [2024-07-15 19:40:09.817103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.087 [2024-07-15 19:40:09.817110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.087 [2024-07-15 19:40:09.819796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.087 [2024-07-15 19:40:09.828326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.087 [2024-07-15 19:40:09.828797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.087 [2024-07-15 19:40:09.828841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.087 [2024-07-15 19:40:09.828865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.087 [2024-07-15 19:40:09.829460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.087 [2024-07-15 19:40:09.830045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.087 [2024-07-15 19:40:09.830077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.087 [2024-07-15 19:40:09.830083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.087 [2024-07-15 19:40:09.832677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.087 [2024-07-15 19:40:09.841245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.087 [2024-07-15 19:40:09.841699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.087 [2024-07-15 19:40:09.841742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.087 [2024-07-15 19:40:09.841765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.087 [2024-07-15 19:40:09.842232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.087 [2024-07-15 19:40:09.842397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.087 [2024-07-15 19:40:09.842406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.087 [2024-07-15 19:40:09.842412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.087 [2024-07-15 19:40:09.845003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.087 [2024-07-15 19:40:09.854027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.087 [2024-07-15 19:40:09.854489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.087 [2024-07-15 19:40:09.854539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.087 [2024-07-15 19:40:09.854563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.087 [2024-07-15 19:40:09.854989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.087 [2024-07-15 19:40:09.855154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.087 [2024-07-15 19:40:09.855164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.087 [2024-07-15 19:40:09.855170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.087 [2024-07-15 19:40:09.857767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.087 [2024-07-15 19:40:09.866945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.087 [2024-07-15 19:40:09.867401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.087 [2024-07-15 19:40:09.867444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.087 [2024-07-15 19:40:09.867467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.087 [2024-07-15 19:40:09.868049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.087 [2024-07-15 19:40:09.868213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.087 [2024-07-15 19:40:09.868222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.087 [2024-07-15 19:40:09.868235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.087 [2024-07-15 19:40:09.870892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.087 [2024-07-15 19:40:09.879787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.087 [2024-07-15 19:40:09.880163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.087 [2024-07-15 19:40:09.880179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.087 [2024-07-15 19:40:09.880186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.087 [2024-07-15 19:40:09.880355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.087 [2024-07-15 19:40:09.880519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.087 [2024-07-15 19:40:09.880528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.087 [2024-07-15 19:40:09.880534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.087 [2024-07-15 19:40:09.883123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.087 [2024-07-15 19:40:09.892610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.087 [2024-07-15 19:40:09.893066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.087 [2024-07-15 19:40:09.893082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.087 [2024-07-15 19:40:09.893090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.087 [2024-07-15 19:40:09.893259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.087 [2024-07-15 19:40:09.893426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.087 [2024-07-15 19:40:09.893435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.087 [2024-07-15 19:40:09.893441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.087 [2024-07-15 19:40:09.896032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.087 [2024-07-15 19:40:09.905515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.087 [2024-07-15 19:40:09.905832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.087 [2024-07-15 19:40:09.905848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.087 [2024-07-15 19:40:09.905855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.087 [2024-07-15 19:40:09.906018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.087 [2024-07-15 19:40:09.906182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.087 [2024-07-15 19:40:09.906191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.087 [2024-07-15 19:40:09.906196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.087 [2024-07-15 19:40:09.908792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.087 [2024-07-15 19:40:09.918379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.087 [2024-07-15 19:40:09.918766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.087 [2024-07-15 19:40:09.918782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.087 [2024-07-15 19:40:09.918789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.087 [2024-07-15 19:40:09.918952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.087 [2024-07-15 19:40:09.919115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.087 [2024-07-15 19:40:09.919124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.087 [2024-07-15 19:40:09.919130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.087 [2024-07-15 19:40:09.921727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.087 [2024-07-15 19:40:09.931311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.087 [2024-07-15 19:40:09.931780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.087 [2024-07-15 19:40:09.931824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.087 [2024-07-15 19:40:09.931847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.088 [2024-07-15 19:40:09.932325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.088 [2024-07-15 19:40:09.932499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.088 [2024-07-15 19:40:09.932509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.088 [2024-07-15 19:40:09.932515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.346 [2024-07-15 19:40:09.935351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.346 [2024-07-15 19:40:09.944502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.346 [2024-07-15 19:40:09.944938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-15 19:40:09.944955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.346 [2024-07-15 19:40:09.944963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.346 [2024-07-15 19:40:09.945140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.346 [2024-07-15 19:40:09.945324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.346 [2024-07-15 19:40:09.945335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.346 [2024-07-15 19:40:09.945342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.346 [2024-07-15 19:40:09.948175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.346 [2024-07-15 19:40:09.957388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.346 [2024-07-15 19:40:09.957774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-15 19:40:09.957790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.346 [2024-07-15 19:40:09.957797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.346 [2024-07-15 19:40:09.957960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.346 [2024-07-15 19:40:09.958124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.346 [2024-07-15 19:40:09.958133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.346 [2024-07-15 19:40:09.958139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.346 [2024-07-15 19:40:09.960733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.346 [2024-07-15 19:40:09.970227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.346 [2024-07-15 19:40:09.970675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-15 19:40:09.970717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.346 [2024-07-15 19:40:09.970740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.346 [2024-07-15 19:40:09.971245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.346 [2024-07-15 19:40:09.971411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.346 [2024-07-15 19:40:09.971421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:09.971427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:09.974015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.347 [2024-07-15 19:40:09.983038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:09.983419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:09.983435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:09.983446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:09.983610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:09.983774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:09.983783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:09.983791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:09.986533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.347 [2024-07-15 19:40:09.995864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:09.996254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:09.996270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:09.996279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:09.996442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:09.996605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:09.996614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:09.996620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:09.999215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.347 [2024-07-15 19:40:10.008926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:10.009388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:10.009405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:10.009413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:10.009584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:10.009756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:10.009766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:10.009773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:10.012602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.347 [2024-07-15 19:40:10.021996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:10.022461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:10.022478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:10.022486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:10.022658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:10.022832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:10.022845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:10.022852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:10.025627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.347 [2024-07-15 19:40:10.035831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:10.036219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:10.036242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:10.036250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:10.036424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:10.036599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:10.036609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:10.036616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:10.039362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.347 [2024-07-15 19:40:10.048828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:10.049298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:10.049316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:10.049323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:10.049496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:10.049671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:10.049681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:10.049687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:10.052295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.347 [2024-07-15 19:40:10.061849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:10.062215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:10.062237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:10.062246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:10.062419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:10.062592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:10.062602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:10.062610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:10.065354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.347 [2024-07-15 19:40:10.074913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:10.075309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:10.075326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:10.075334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:10.075512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:10.075677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:10.075686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:10.075692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:10.078288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.347 [2024-07-15 19:40:10.087813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:10.088279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:10.088297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:10.088305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:10.088478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:10.088660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:10.088669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:10.088676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:10.091272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.347 [2024-07-15 19:40:10.100603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:10.100998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:10.101013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:10.101021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:10.101183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:10.101351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:10.101361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:10.101367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.347 [2024-07-15 19:40:10.103955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.347 [2024-07-15 19:40:10.113439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.347 [2024-07-15 19:40:10.113903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-15 19:40:10.113946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.347 [2024-07-15 19:40:10.113968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.347 [2024-07-15 19:40:10.114585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.347 [2024-07-15 19:40:10.114948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.347 [2024-07-15 19:40:10.114957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.347 [2024-07-15 19:40:10.114963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.348 [2024-07-15 19:40:10.117556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.348 [2024-07-15 19:40:10.126286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.348 [2024-07-15 19:40:10.126659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-15 19:40:10.126678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.348 [2024-07-15 19:40:10.126685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.348 [2024-07-15 19:40:10.126850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.348 [2024-07-15 19:40:10.127015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.348 [2024-07-15 19:40:10.127024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.348 [2024-07-15 19:40:10.127030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.348 [2024-07-15 19:40:10.129627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.348 [2024-07-15 19:40:10.139322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.348 [2024-07-15 19:40:10.139643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-15 19:40:10.139661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.348 [2024-07-15 19:40:10.139669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.348 [2024-07-15 19:40:10.139841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.348 [2024-07-15 19:40:10.140015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.348 [2024-07-15 19:40:10.140025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.348 [2024-07-15 19:40:10.140031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.348 [2024-07-15 19:40:10.142776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.348 [2024-07-15 19:40:10.152334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.348 [2024-07-15 19:40:10.152780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-15 19:40:10.152797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.348 [2024-07-15 19:40:10.152804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.348 [2024-07-15 19:40:10.152977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.348 [2024-07-15 19:40:10.153151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.348 [2024-07-15 19:40:10.153160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.348 [2024-07-15 19:40:10.153171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.348 [2024-07-15 19:40:10.155793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.348 [2024-07-15 19:40:10.165125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.348 [2024-07-15 19:40:10.165428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-15 19:40:10.165444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.348 [2024-07-15 19:40:10.165452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.348 [2024-07-15 19:40:10.165615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.348 [2024-07-15 19:40:10.165778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.348 [2024-07-15 19:40:10.165787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.348 [2024-07-15 19:40:10.165793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.348 [2024-07-15 19:40:10.168386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.348 [2024-07-15 19:40:10.177961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.348 [2024-07-15 19:40:10.178405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-15 19:40:10.178421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.348 [2024-07-15 19:40:10.178429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.348 [2024-07-15 19:40:10.178592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.348 [2024-07-15 19:40:10.178754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.348 [2024-07-15 19:40:10.178763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.348 [2024-07-15 19:40:10.178770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.348 [2024-07-15 19:40:10.181369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.348 [2024-07-15 19:40:10.191044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.348 [2024-07-15 19:40:10.191470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-15 19:40:10.191512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.348 [2024-07-15 19:40:10.191535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.348 [2024-07-15 19:40:10.192002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.348 [2024-07-15 19:40:10.192166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.348 [2024-07-15 19:40:10.192175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.348 [2024-07-15 19:40:10.192181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.348 [2024-07-15 19:40:10.194771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.607 [2024-07-15 19:40:10.204061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.607 [2024-07-15 19:40:10.204525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.607 [2024-07-15 19:40:10.204568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.607 [2024-07-15 19:40:10.204590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.607 [2024-07-15 19:40:10.205100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.607 [2024-07-15 19:40:10.205281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.607 [2024-07-15 19:40:10.205291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.608 [2024-07-15 19:40:10.205298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.608 [2024-07-15 19:40:10.207963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.608 [2024-07-15 19:40:10.216996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.608 [2024-07-15 19:40:10.217463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.608 [2024-07-15 19:40:10.217505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.608 [2024-07-15 19:40:10.217527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.608 [2024-07-15 19:40:10.218078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.608 [2024-07-15 19:40:10.218247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.608 [2024-07-15 19:40:10.218257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.608 [2024-07-15 19:40:10.218263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.608 [2024-07-15 19:40:10.220857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.608 [2024-07-15 19:40:10.229832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.608 [2024-07-15 19:40:10.230289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.608 [2024-07-15 19:40:10.230307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.608 [2024-07-15 19:40:10.230314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.608 [2024-07-15 19:40:10.230477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.608 [2024-07-15 19:40:10.230639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.608 [2024-07-15 19:40:10.230649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.608 [2024-07-15 19:40:10.230654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.608 [2024-07-15 19:40:10.233250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.608 [2024-07-15 19:40:10.242743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.608 [2024-07-15 19:40:10.243203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.608 [2024-07-15 19:40:10.243257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.608 [2024-07-15 19:40:10.243281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.608 [2024-07-15 19:40:10.243859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.608 [2024-07-15 19:40:10.244265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.608 [2024-07-15 19:40:10.244274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.608 [2024-07-15 19:40:10.244280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.608 [2024-07-15 19:40:10.246871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.608 [2024-07-15 19:40:10.255595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.608 [2024-07-15 19:40:10.256066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.608 [2024-07-15 19:40:10.256109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.608 [2024-07-15 19:40:10.256131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.608 [2024-07-15 19:40:10.256727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.608 [2024-07-15 19:40:10.257220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.608 [2024-07-15 19:40:10.257233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.608 [2024-07-15 19:40:10.257239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.608 [2024-07-15 19:40:10.260064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.608 [2024-07-15 19:40:10.268594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.608 [2024-07-15 19:40:10.269034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.608 [2024-07-15 19:40:10.269051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.608 [2024-07-15 19:40:10.269059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.608 [2024-07-15 19:40:10.269237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.608 [2024-07-15 19:40:10.269420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.608 [2024-07-15 19:40:10.269429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.608 [2024-07-15 19:40:10.269435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.608 [2024-07-15 19:40:10.272025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.608 [2024-07-15 19:40:10.281505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.608 [2024-07-15 19:40:10.281891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.608 [2024-07-15 19:40:10.281907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.608 [2024-07-15 19:40:10.281914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.608 [2024-07-15 19:40:10.282076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.608 [2024-07-15 19:40:10.282245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.608 [2024-07-15 19:40:10.282255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.608 [2024-07-15 19:40:10.282261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.608 [2024-07-15 19:40:10.284857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.608 [2024-07-15 19:40:10.294368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.608 [2024-07-15 19:40:10.294813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.608 [2024-07-15 19:40:10.294831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.608 [2024-07-15 19:40:10.294838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.608 [2024-07-15 19:40:10.295010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.608 [2024-07-15 19:40:10.295182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.608 [2024-07-15 19:40:10.295191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.608 [2024-07-15 19:40:10.295198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.608 [2024-07-15 19:40:10.297945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.608 [2024-07-15 19:40:10.307447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.608 [2024-07-15 19:40:10.307893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.608 [2024-07-15 19:40:10.307910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.608 [2024-07-15 19:40:10.307917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.608 [2024-07-15 19:40:10.308095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.608 [2024-07-15 19:40:10.308280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.608 [2024-07-15 19:40:10.308290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.609 [2024-07-15 19:40:10.308297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.609 [2024-07-15 19:40:10.311054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.609 [2024-07-15 19:40:10.320456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.609 [2024-07-15 19:40:10.320918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.609 [2024-07-15 19:40:10.320934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.609 [2024-07-15 19:40:10.320942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.609 [2024-07-15 19:40:10.321115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.609 [2024-07-15 19:40:10.321293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.609 [2024-07-15 19:40:10.321303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.609 [2024-07-15 19:40:10.321310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.609 [2024-07-15 19:40:10.324049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.609 [2024-07-15 19:40:10.333442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.609 [2024-07-15 19:40:10.333890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.609 [2024-07-15 19:40:10.333907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.609 [2024-07-15 19:40:10.333917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.609 [2024-07-15 19:40:10.334090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.609 [2024-07-15 19:40:10.334270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.609 [2024-07-15 19:40:10.334280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.609 [2024-07-15 19:40:10.334287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.609 [2024-07-15 19:40:10.337027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.609 [2024-07-15 19:40:10.346417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.609 [2024-07-15 19:40:10.346883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.609 [2024-07-15 19:40:10.346900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.609 [2024-07-15 19:40:10.346907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.609 [2024-07-15 19:40:10.347079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.609 [2024-07-15 19:40:10.347256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.609 [2024-07-15 19:40:10.347266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.609 [2024-07-15 19:40:10.347273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.609 [2024-07-15 19:40:10.350012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.609 [2024-07-15 19:40:10.359464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.609 [2024-07-15 19:40:10.359925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.609 [2024-07-15 19:40:10.359942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.609 [2024-07-15 19:40:10.359951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.609 [2024-07-15 19:40:10.360123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.609 [2024-07-15 19:40:10.360302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.609 [2024-07-15 19:40:10.360312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.609 [2024-07-15 19:40:10.360319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.609 [2024-07-15 19:40:10.363033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.609 [2024-07-15 19:40:10.372522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.609 [2024-07-15 19:40:10.372987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.609 [2024-07-15 19:40:10.373003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.609 [2024-07-15 19:40:10.373011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.609 [2024-07-15 19:40:10.373182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.609 [2024-07-15 19:40:10.373361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.609 [2024-07-15 19:40:10.373373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.609 [2024-07-15 19:40:10.373379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.609 [2024-07-15 19:40:10.376119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.609 [2024-07-15 19:40:10.385507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.609 [2024-07-15 19:40:10.385887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.609 [2024-07-15 19:40:10.385904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.609 [2024-07-15 19:40:10.385911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.609 [2024-07-15 19:40:10.386084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.609 [2024-07-15 19:40:10.386262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.609 [2024-07-15 19:40:10.386273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.609 [2024-07-15 19:40:10.386280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.609 [2024-07-15 19:40:10.389016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.609 [2024-07-15 19:40:10.398566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.609 [2024-07-15 19:40:10.398958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.609 [2024-07-15 19:40:10.398975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.609 [2024-07-15 19:40:10.398983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.609 [2024-07-15 19:40:10.399155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.609 [2024-07-15 19:40:10.399333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.609 [2024-07-15 19:40:10.399343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.609 [2024-07-15 19:40:10.399350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.609 [2024-07-15 19:40:10.402089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.609 [2024-07-15 19:40:10.411634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.609 [2024-07-15 19:40:10.412075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.609 [2024-07-15 19:40:10.412092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.609 [2024-07-15 19:40:10.412099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.609 [2024-07-15 19:40:10.412277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.609 [2024-07-15 19:40:10.412450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.609 [2024-07-15 19:40:10.412459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.609 [2024-07-15 19:40:10.412466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.609 [2024-07-15 19:40:10.415205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.609 [2024-07-15 19:40:10.424608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.609 [2024-07-15 19:40:10.425078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.609 [2024-07-15 19:40:10.425096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.609 [2024-07-15 19:40:10.425103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.609 [2024-07-15 19:40:10.425280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.609 [2024-07-15 19:40:10.425454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.609 [2024-07-15 19:40:10.425462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.609 [2024-07-15 19:40:10.425469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.609 [2024-07-15 19:40:10.428210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.609 [2024-07-15 19:40:10.437599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.609 [2024-07-15 19:40:10.438082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.609 [2024-07-15 19:40:10.438099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.610 [2024-07-15 19:40:10.438106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.610 [2024-07-15 19:40:10.438287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.610 [2024-07-15 19:40:10.438460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.610 [2024-07-15 19:40:10.438470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.610 [2024-07-15 19:40:10.438476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.610 [2024-07-15 19:40:10.441301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.610 [2024-07-15 19:40:10.450586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.610 [2024-07-15 19:40:10.451034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.610 [2024-07-15 19:40:10.451051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.610 [2024-07-15 19:40:10.451058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.610 [2024-07-15 19:40:10.451237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.610 [2024-07-15 19:40:10.451410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.610 [2024-07-15 19:40:10.451419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.610 [2024-07-15 19:40:10.451426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.610 [2024-07-15 19:40:10.454165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.869 [2024-07-15 19:40:10.463608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.869 [2024-07-15 19:40:10.464051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.869 [2024-07-15 19:40:10.464068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.869 [2024-07-15 19:40:10.464079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.869 [2024-07-15 19:40:10.464258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.869 [2024-07-15 19:40:10.464430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.869 [2024-07-15 19:40:10.464440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.870 [2024-07-15 19:40:10.464446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.870 [2024-07-15 19:40:10.467262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.870 [2024-07-15 19:40:10.476680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.870 [2024-07-15 19:40:10.477140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.870 [2024-07-15 19:40:10.477157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.870 [2024-07-15 19:40:10.477164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.870 [2024-07-15 19:40:10.477344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.870 [2024-07-15 19:40:10.477517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.870 [2024-07-15 19:40:10.477526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.870 [2024-07-15 19:40:10.477533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.870 [2024-07-15 19:40:10.480273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.870 [2024-07-15 19:40:10.489658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.870 [2024-07-15 19:40:10.490123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.870 [2024-07-15 19:40:10.490140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.870 [2024-07-15 19:40:10.490147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.870 [2024-07-15 19:40:10.490324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.870 [2024-07-15 19:40:10.490496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.870 [2024-07-15 19:40:10.490505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.870 [2024-07-15 19:40:10.490512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.870 [2024-07-15 19:40:10.493248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.870 [2024-07-15 19:40:10.502631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.870 [2024-07-15 19:40:10.503065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.870 [2024-07-15 19:40:10.503081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.870 [2024-07-15 19:40:10.503088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.870 [2024-07-15 19:40:10.503266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.870 [2024-07-15 19:40:10.503439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.870 [2024-07-15 19:40:10.503451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.870 [2024-07-15 19:40:10.503457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.870 [2024-07-15 19:40:10.506196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.870 [2024-07-15 19:40:10.515595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.870 [2024-07-15 19:40:10.516048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.870 [2024-07-15 19:40:10.516065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.870 [2024-07-15 19:40:10.516073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.870 [2024-07-15 19:40:10.516249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.870 [2024-07-15 19:40:10.516422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.870 [2024-07-15 19:40:10.516431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.870 [2024-07-15 19:40:10.516438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.870 [2024-07-15 19:40:10.519176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.870 [2024-07-15 19:40:10.528577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.870 [2024-07-15 19:40:10.528974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.870 [2024-07-15 19:40:10.528991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.870 [2024-07-15 19:40:10.528998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.870 [2024-07-15 19:40:10.529170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.870 [2024-07-15 19:40:10.529372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.870 [2024-07-15 19:40:10.529382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.870 [2024-07-15 19:40:10.529389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.870 [2024-07-15 19:40:10.532129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.870 [2024-07-15 19:40:10.541581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.870 [2024-07-15 19:40:10.542020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.870 [2024-07-15 19:40:10.542037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.870 [2024-07-15 19:40:10.542045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.870 [2024-07-15 19:40:10.542217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.870 [2024-07-15 19:40:10.542396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.870 [2024-07-15 19:40:10.542406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.870 [2024-07-15 19:40:10.542413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.870 [2024-07-15 19:40:10.545152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.870 [2024-07-15 19:40:10.554541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.870 [2024-07-15 19:40:10.554988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.870 [2024-07-15 19:40:10.555004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.870 [2024-07-15 19:40:10.555011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.870 [2024-07-15 19:40:10.555184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.870 [2024-07-15 19:40:10.555364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.870 [2024-07-15 19:40:10.555374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.870 [2024-07-15 19:40:10.555380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.870 [2024-07-15 19:40:10.558121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.870 [2024-07-15 19:40:10.567507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.870 [2024-07-15 19:40:10.567941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.870 [2024-07-15 19:40:10.567958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.870 [2024-07-15 19:40:10.567965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.870 [2024-07-15 19:40:10.568137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.568314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.568324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.568330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.571069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.871 [2024-07-15 19:40:10.580453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.871 [2024-07-15 19:40:10.580946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.871 [2024-07-15 19:40:10.580963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.871 [2024-07-15 19:40:10.580970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.871 [2024-07-15 19:40:10.581143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.581320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.581330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.581337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.584082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.871 [2024-07-15 19:40:10.593466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.871 [2024-07-15 19:40:10.593906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.871 [2024-07-15 19:40:10.593923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.871 [2024-07-15 19:40:10.593930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.871 [2024-07-15 19:40:10.594106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.594284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.594294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.594302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.597043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.871 [2024-07-15 19:40:10.606441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.871 [2024-07-15 19:40:10.606881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.871 [2024-07-15 19:40:10.606897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.871 [2024-07-15 19:40:10.606905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.871 [2024-07-15 19:40:10.607077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.607255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.607266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.607272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.610017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.871 [2024-07-15 19:40:10.619424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.871 [2024-07-15 19:40:10.619748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.871 [2024-07-15 19:40:10.619765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.871 [2024-07-15 19:40:10.619773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.871 [2024-07-15 19:40:10.619944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.620116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.620125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.620132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.622875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.871 [2024-07-15 19:40:10.632446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.871 [2024-07-15 19:40:10.632889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.871 [2024-07-15 19:40:10.632907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.871 [2024-07-15 19:40:10.632915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.871 [2024-07-15 19:40:10.633086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.633264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.633274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.633284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.636029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.871 [2024-07-15 19:40:10.645630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.871 [2024-07-15 19:40:10.646023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.871 [2024-07-15 19:40:10.646040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.871 [2024-07-15 19:40:10.646047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.871 [2024-07-15 19:40:10.646231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.646410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.646418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.646425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.649234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.871 [2024-07-15 19:40:10.658563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.871 [2024-07-15 19:40:10.658947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.871 [2024-07-15 19:40:10.658963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.871 [2024-07-15 19:40:10.658970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.871 [2024-07-15 19:40:10.659133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.659303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.659313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.659319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.661915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.871 [2024-07-15 19:40:10.671420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.871 [2024-07-15 19:40:10.671731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.871 [2024-07-15 19:40:10.671748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.871 [2024-07-15 19:40:10.671755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.871 [2024-07-15 19:40:10.671919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.672082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.672091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.672098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.674699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.871 [2024-07-15 19:40:10.684362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.871 [2024-07-15 19:40:10.684736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.871 [2024-07-15 19:40:10.684784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.871 [2024-07-15 19:40:10.684806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.871 [2024-07-15 19:40:10.685253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.685418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.685427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.685433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.688028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.871 [2024-07-15 19:40:10.697490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.871 [2024-07-15 19:40:10.697881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.871 [2024-07-15 19:40:10.697898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.871 [2024-07-15 19:40:10.697906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.871 [2024-07-15 19:40:10.698095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.871 [2024-07-15 19:40:10.698274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.871 [2024-07-15 19:40:10.698285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.871 [2024-07-15 19:40:10.698291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.871 [2024-07-15 19:40:10.701048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.871 [2024-07-15 19:40:10.710404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.872 [2024-07-15 19:40:10.710718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.872 [2024-07-15 19:40:10.710735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:33:59.872 [2024-07-15 19:40:10.710743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:33:59.872 [2024-07-15 19:40:10.710907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:33:59.872 [2024-07-15 19:40:10.711071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.872 [2024-07-15 19:40:10.711080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.872 [2024-07-15 19:40:10.711086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.872 [2024-07-15 19:40:10.713685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.131 [2024-07-15 19:40:10.723503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.131 [2024-07-15 19:40:10.723881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.131 [2024-07-15 19:40:10.723900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.131 [2024-07-15 19:40:10.723908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.131 [2024-07-15 19:40:10.724087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.131 [2024-07-15 19:40:10.724277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.131 [2024-07-15 19:40:10.724287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.131 [2024-07-15 19:40:10.724296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.131 [2024-07-15 19:40:10.726989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.131 [2024-07-15 19:40:10.736314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.131 [2024-07-15 19:40:10.736694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.131 [2024-07-15 19:40:10.736711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.131 [2024-07-15 19:40:10.736718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.131 [2024-07-15 19:40:10.736882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.131 [2024-07-15 19:40:10.737045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.131 [2024-07-15 19:40:10.737055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.131 [2024-07-15 19:40:10.737061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.131 [2024-07-15 19:40:10.739657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.131 [2024-07-15 19:40:10.749134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.131 [2024-07-15 19:40:10.749515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.131 [2024-07-15 19:40:10.749559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.131 [2024-07-15 19:40:10.749583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.132 [2024-07-15 19:40:10.750145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.132 [2024-07-15 19:40:10.750328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.132 [2024-07-15 19:40:10.750338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.132 [2024-07-15 19:40:10.750343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.132 [2024-07-15 19:40:10.752938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.132 [2024-07-15 19:40:10.761988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.132 [2024-07-15 19:40:10.762370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.132 [2024-07-15 19:40:10.762414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.132 [2024-07-15 19:40:10.762437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.132 [2024-07-15 19:40:10.763028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.132 [2024-07-15 19:40:10.763192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.132 [2024-07-15 19:40:10.763202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.132 [2024-07-15 19:40:10.763209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.132 [2024-07-15 19:40:10.765807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.132 [2024-07-15 19:40:10.774786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.132 [2024-07-15 19:40:10.775218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.132 [2024-07-15 19:40:10.775241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.132 [2024-07-15 19:40:10.775248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.132 [2024-07-15 19:40:10.775411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.132 [2024-07-15 19:40:10.775575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.132 [2024-07-15 19:40:10.775584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.132 [2024-07-15 19:40:10.775590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.132 [2024-07-15 19:40:10.778183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.132 [2024-07-15 19:40:10.787665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.132 [2024-07-15 19:40:10.787982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.132 [2024-07-15 19:40:10.787999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.132 [2024-07-15 19:40:10.788006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.132 [2024-07-15 19:40:10.788170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.132 [2024-07-15 19:40:10.788341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.132 [2024-07-15 19:40:10.788350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.132 [2024-07-15 19:40:10.788357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.132 [2024-07-15 19:40:10.790951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.132 [2024-07-15 19:40:10.800609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.132 [2024-07-15 19:40:10.801115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.132 [2024-07-15 19:40:10.801157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.132 [2024-07-15 19:40:10.801180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.132 [2024-07-15 19:40:10.801648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.132 [2024-07-15 19:40:10.801813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.132 [2024-07-15 19:40:10.801822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.132 [2024-07-15 19:40:10.801828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.132 [2024-07-15 19:40:10.804616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.132 [2024-07-15 19:40:10.813511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.132 [2024-07-15 19:40:10.813909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.132 [2024-07-15 19:40:10.813954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.132 [2024-07-15 19:40:10.813985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.132 [2024-07-15 19:40:10.814477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.132 [2024-07-15 19:40:10.814642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.132 [2024-07-15 19:40:10.814651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.132 [2024-07-15 19:40:10.814672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.132 [2024-07-15 19:40:10.818751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.132 [2024-07-15 19:40:10.826800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.132 [2024-07-15 19:40:10.827277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.132 [2024-07-15 19:40:10.827324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.132 [2024-07-15 19:40:10.827348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.132 [2024-07-15 19:40:10.827928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.132 [2024-07-15 19:40:10.828521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.132 [2024-07-15 19:40:10.828551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.132 [2024-07-15 19:40:10.828558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.132 [2024-07-15 19:40:10.831227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.132 [2024-07-15 19:40:10.839597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.132 [2024-07-15 19:40:10.840078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.132 [2024-07-15 19:40:10.840095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.132 [2024-07-15 19:40:10.840102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.132 [2024-07-15 19:40:10.840269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.132 [2024-07-15 19:40:10.840433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.133 [2024-07-15 19:40:10.840443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.133 [2024-07-15 19:40:10.840449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.133 [2024-07-15 19:40:10.843041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.133 [2024-07-15 19:40:10.852478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.133 [2024-07-15 19:40:10.852839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.133 [2024-07-15 19:40:10.852855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.133 [2024-07-15 19:40:10.852862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.133 [2024-07-15 19:40:10.853025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.133 [2024-07-15 19:40:10.853188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.133 [2024-07-15 19:40:10.853204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.133 [2024-07-15 19:40:10.853211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.133 [2024-07-15 19:40:10.855805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.133 [2024-07-15 19:40:10.865545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.133 [2024-07-15 19:40:10.865864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.133 [2024-07-15 19:40:10.865881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.133 [2024-07-15 19:40:10.865889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.133 [2024-07-15 19:40:10.866062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.133 [2024-07-15 19:40:10.866240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.133 [2024-07-15 19:40:10.866251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.133 [2024-07-15 19:40:10.866257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.133 [2024-07-15 19:40:10.869003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.133 [2024-07-15 19:40:10.878514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.133 [2024-07-15 19:40:10.879033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.133 [2024-07-15 19:40:10.879077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.133 [2024-07-15 19:40:10.879100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.133 [2024-07-15 19:40:10.879561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.133 [2024-07-15 19:40:10.879726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.133 [2024-07-15 19:40:10.879735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.133 [2024-07-15 19:40:10.879741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.133 [2024-07-15 19:40:10.882336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.133 [2024-07-15 19:40:10.891374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.133 [2024-07-15 19:40:10.891722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.133 [2024-07-15 19:40:10.891739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.133 [2024-07-15 19:40:10.891746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.133 [2024-07-15 19:40:10.891909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.133 [2024-07-15 19:40:10.892072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.133 [2024-07-15 19:40:10.892082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.133 [2024-07-15 19:40:10.892088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.133 [2024-07-15 19:40:10.894686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.133 [2024-07-15 19:40:10.904183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.133 [2024-07-15 19:40:10.904496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.133 [2024-07-15 19:40:10.904512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.133 [2024-07-15 19:40:10.904520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.133 [2024-07-15 19:40:10.904683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.133 [2024-07-15 19:40:10.904847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.133 [2024-07-15 19:40:10.904856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.133 [2024-07-15 19:40:10.904862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.133 [2024-07-15 19:40:10.907459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.133 [2024-07-15 19:40:10.917116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.133 [2024-07-15 19:40:10.917504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.133 [2024-07-15 19:40:10.917521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.133 [2024-07-15 19:40:10.917529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.133 [2024-07-15 19:40:10.917692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.133 [2024-07-15 19:40:10.917856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.133 [2024-07-15 19:40:10.917865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.133 [2024-07-15 19:40:10.917871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.133 [2024-07-15 19:40:10.920466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.133 [2024-07-15 19:40:10.929969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.133 [2024-07-15 19:40:10.930370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.133 [2024-07-15 19:40:10.930414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.133 [2024-07-15 19:40:10.930437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.133 [2024-07-15 19:40:10.930938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.133 [2024-07-15 19:40:10.931102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.133 [2024-07-15 19:40:10.931111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.133 [2024-07-15 19:40:10.931118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.133 [2024-07-15 19:40:10.933715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.134 [2024-07-15 19:40:10.942905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.134 [2024-07-15 19:40:10.943391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.134 [2024-07-15 19:40:10.943435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.134 [2024-07-15 19:40:10.943459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.134 [2024-07-15 19:40:10.943680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.134 [2024-07-15 19:40:10.943845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.134 [2024-07-15 19:40:10.943854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.134 [2024-07-15 19:40:10.943861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.134 [2024-07-15 19:40:10.946668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.134 [2024-07-15 19:40:10.955902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.134 [2024-07-15 19:40:10.956332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.134 [2024-07-15 19:40:10.956349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.134 [2024-07-15 19:40:10.956356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.134 [2024-07-15 19:40:10.956519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.134 [2024-07-15 19:40:10.956682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.134 [2024-07-15 19:40:10.956691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.134 [2024-07-15 19:40:10.956697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.134 [2024-07-15 19:40:10.959292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.134 [2024-07-15 19:40:10.968784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.134 [2024-07-15 19:40:10.969247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.134 [2024-07-15 19:40:10.969290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.134 [2024-07-15 19:40:10.969313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.134 [2024-07-15 19:40:10.969852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.134 [2024-07-15 19:40:10.970016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.134 [2024-07-15 19:40:10.970025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.134 [2024-07-15 19:40:10.970031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.134 [2024-07-15 19:40:10.972625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.134 [2024-07-15 19:40:10.981847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.134 [2024-07-15 19:40:10.982216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.134 [2024-07-15 19:40:10.982237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.134 [2024-07-15 19:40:10.982245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.134 [2024-07-15 19:40:10.982417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.134 [2024-07-15 19:40:10.982590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.134 [2024-07-15 19:40:10.982599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.134 [2024-07-15 19:40:10.982609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.394 [2024-07-15 19:40:10.985354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.394 [2024-07-15 19:40:10.994873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.394 [2024-07-15 19:40:10.995291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.394 [2024-07-15 19:40:10.995343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.394 [2024-07-15 19:40:10.995365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.394 [2024-07-15 19:40:10.995888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.394 [2024-07-15 19:40:10.996053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.394 [2024-07-15 19:40:10.996063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.394 [2024-07-15 19:40:10.996068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.394 [2024-07-15 19:40:10.998664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.394 [2024-07-15 19:40:11.007698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.394 [2024-07-15 19:40:11.008125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.394 [2024-07-15 19:40:11.008168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.394 [2024-07-15 19:40:11.008191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.394 [2024-07-15 19:40:11.008709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.394 [2024-07-15 19:40:11.008874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.394 [2024-07-15 19:40:11.008883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.394 [2024-07-15 19:40:11.008889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.394 [2024-07-15 19:40:11.011485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.394 [2024-07-15 19:40:11.020532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.394 [2024-07-15 19:40:11.020892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.394 [2024-07-15 19:40:11.020909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.394 [2024-07-15 19:40:11.020916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.394 [2024-07-15 19:40:11.021079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.394 [2024-07-15 19:40:11.021247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.394 [2024-07-15 19:40:11.021257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.394 [2024-07-15 19:40:11.021264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.394 [2024-07-15 19:40:11.023856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.394 [2024-07-15 19:40:11.033364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.394 [2024-07-15 19:40:11.033778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.394 [2024-07-15 19:40:11.033794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.394 [2024-07-15 19:40:11.033801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.394 [2024-07-15 19:40:11.033963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.394 [2024-07-15 19:40:11.034126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.394 [2024-07-15 19:40:11.034136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.394 [2024-07-15 19:40:11.034142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.394 [2024-07-15 19:40:11.036735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.394 [2024-07-15 19:40:11.046215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.394 [2024-07-15 19:40:11.046642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.394 [2024-07-15 19:40:11.046659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.394 [2024-07-15 19:40:11.046666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.394 [2024-07-15 19:40:11.046829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.394 [2024-07-15 19:40:11.046993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.394 [2024-07-15 19:40:11.047002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.394 [2024-07-15 19:40:11.047008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.394 [2024-07-15 19:40:11.049601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.394 [2024-07-15 19:40:11.059177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.394 [2024-07-15 19:40:11.059608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.394 [2024-07-15 19:40:11.059643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.059668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.060233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.060397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.395 [2024-07-15 19:40:11.060406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.395 [2024-07-15 19:40:11.060413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.395 [2024-07-15 19:40:11.063004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.395 [2024-07-15 19:40:11.072148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.395 [2024-07-15 19:40:11.072614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.395 [2024-07-15 19:40:11.072658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.072681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.073189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.073361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.395 [2024-07-15 19:40:11.073370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.395 [2024-07-15 19:40:11.073377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.395 [2024-07-15 19:40:11.075972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.395 [2024-07-15 19:40:11.085009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.395 [2024-07-15 19:40:11.085463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.395 [2024-07-15 19:40:11.085480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.085488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.085651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.085813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.395 [2024-07-15 19:40:11.085822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.395 [2024-07-15 19:40:11.085829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.395 [2024-07-15 19:40:11.088423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.395 [2024-07-15 19:40:11.097900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.395 [2024-07-15 19:40:11.098295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.395 [2024-07-15 19:40:11.098311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.098319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.098482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.098645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.395 [2024-07-15 19:40:11.098654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.395 [2024-07-15 19:40:11.098661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.395 [2024-07-15 19:40:11.101349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.395 [2024-07-15 19:40:11.110910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.395 [2024-07-15 19:40:11.111414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.395 [2024-07-15 19:40:11.111458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.111480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.112000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.112164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.395 [2024-07-15 19:40:11.112174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.395 [2024-07-15 19:40:11.112183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.395 [2024-07-15 19:40:11.114942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.395 [2024-07-15 19:40:11.123878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.395 [2024-07-15 19:40:11.124344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.395 [2024-07-15 19:40:11.124389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.124412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.124992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.125189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.395 [2024-07-15 19:40:11.125198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.395 [2024-07-15 19:40:11.125205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.395 [2024-07-15 19:40:11.127812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.395 [2024-07-15 19:40:11.136691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.395 [2024-07-15 19:40:11.137120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.395 [2024-07-15 19:40:11.137136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.137144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.137310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.137473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.395 [2024-07-15 19:40:11.137482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.395 [2024-07-15 19:40:11.137488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.395 [2024-07-15 19:40:11.140081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.395 [2024-07-15 19:40:11.149626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.395 [2024-07-15 19:40:11.150101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.395 [2024-07-15 19:40:11.150120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.150130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.150309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.150494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.395 [2024-07-15 19:40:11.150504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.395 [2024-07-15 19:40:11.150510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.395 [2024-07-15 19:40:11.153102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.395 [2024-07-15 19:40:11.162552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.395 [2024-07-15 19:40:11.163021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.395 [2024-07-15 19:40:11.163071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.163095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.163558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.163723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.395 [2024-07-15 19:40:11.163733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.395 [2024-07-15 19:40:11.163739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.395 [2024-07-15 19:40:11.166336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.395 [2024-07-15 19:40:11.175374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.395 [2024-07-15 19:40:11.175736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.395 [2024-07-15 19:40:11.175752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.175759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.175922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.176085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.395 [2024-07-15 19:40:11.176093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.395 [2024-07-15 19:40:11.176099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.395 [2024-07-15 19:40:11.178701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.395 [2024-07-15 19:40:11.188262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.395 [2024-07-15 19:40:11.188735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.395 [2024-07-15 19:40:11.188777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.395 [2024-07-15 19:40:11.188799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.395 [2024-07-15 19:40:11.189361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.395 [2024-07-15 19:40:11.189525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.396 [2024-07-15 19:40:11.189533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.396 [2024-07-15 19:40:11.189540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.396 [2024-07-15 19:40:11.192129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.396 [2024-07-15 19:40:11.201154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.396 [2024-07-15 19:40:11.201527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.396 [2024-07-15 19:40:11.201544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.396 [2024-07-15 19:40:11.201553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.396 [2024-07-15 19:40:11.201715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.396 [2024-07-15 19:40:11.201880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.396 [2024-07-15 19:40:11.201889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.396 [2024-07-15 19:40:11.201895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.396 [2024-07-15 19:40:11.204664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.396 [2024-07-15 19:40:11.214076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.396 [2024-07-15 19:40:11.214519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.396 [2024-07-15 19:40:11.214562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.396 [2024-07-15 19:40:11.214584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.396 [2024-07-15 19:40:11.215163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.396 [2024-07-15 19:40:11.215384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.396 [2024-07-15 19:40:11.215394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.396 [2024-07-15 19:40:11.215400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.396 [2024-07-15 19:40:11.218000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.396 [2024-07-15 19:40:11.226881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.396 [2024-07-15 19:40:11.227343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.396 [2024-07-15 19:40:11.227360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.396 [2024-07-15 19:40:11.227368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.396 [2024-07-15 19:40:11.227531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.396 [2024-07-15 19:40:11.227694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.396 [2024-07-15 19:40:11.227703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.396 [2024-07-15 19:40:11.227709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.396 [2024-07-15 19:40:11.230306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.396 [2024-07-15 19:40:11.239785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.396 [2024-07-15 19:40:11.240194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.396 [2024-07-15 19:40:11.240211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.396 [2024-07-15 19:40:11.240218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.396 [2024-07-15 19:40:11.240388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.396 [2024-07-15 19:40:11.240551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.396 [2024-07-15 19:40:11.240561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.396 [2024-07-15 19:40:11.240567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.396 [2024-07-15 19:40:11.243217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.657 [2024-07-15 19:40:11.252775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.657 [2024-07-15 19:40:11.253246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.657 [2024-07-15 19:40:11.253289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.657 [2024-07-15 19:40:11.253311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.657 [2024-07-15 19:40:11.253889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.657 [2024-07-15 19:40:11.254483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.657 [2024-07-15 19:40:11.254509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.657 [2024-07-15 19:40:11.254529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.657 [2024-07-15 19:40:11.257209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.657 [2024-07-15 19:40:11.265663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.657 [2024-07-15 19:40:11.266122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.657 [2024-07-15 19:40:11.266139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.657 [2024-07-15 19:40:11.266147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.657 [2024-07-15 19:40:11.266316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.657 [2024-07-15 19:40:11.266479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.657 [2024-07-15 19:40:11.266488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.657 [2024-07-15 19:40:11.266494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.657 [2024-07-15 19:40:11.269084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.657 [2024-07-15 19:40:11.278569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.657 [2024-07-15 19:40:11.279025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.657 [2024-07-15 19:40:11.279067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.657 [2024-07-15 19:40:11.279089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.657 [2024-07-15 19:40:11.279645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.657 [2024-07-15 19:40:11.279901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.657 [2024-07-15 19:40:11.279914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.657 [2024-07-15 19:40:11.279924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.657 [2024-07-15 19:40:11.283976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.657 [2024-07-15 19:40:11.291883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.657 [2024-07-15 19:40:11.292279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.657 [2024-07-15 19:40:11.292323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.657 [2024-07-15 19:40:11.292354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.657 [2024-07-15 19:40:11.292806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.657 [2024-07-15 19:40:11.292974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.657 [2024-07-15 19:40:11.292984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.657 [2024-07-15 19:40:11.292990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.657 [2024-07-15 19:40:11.295657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.657 [2024-07-15 19:40:11.304797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.657 [2024-07-15 19:40:11.305234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.657 [2024-07-15 19:40:11.305250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.657 [2024-07-15 19:40:11.305258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.657 [2024-07-15 19:40:11.305420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.657 [2024-07-15 19:40:11.305583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.657 [2024-07-15 19:40:11.305591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.657 [2024-07-15 19:40:11.305598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.657 [2024-07-15 19:40:11.308190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.657 [2024-07-15 19:40:11.317675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.657 [2024-07-15 19:40:11.318035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.657 [2024-07-15 19:40:11.318051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.657 [2024-07-15 19:40:11.318058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.657 [2024-07-15 19:40:11.318220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.657 [2024-07-15 19:40:11.318390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.657 [2024-07-15 19:40:11.318399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.657 [2024-07-15 19:40:11.318405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.657 [2024-07-15 19:40:11.320995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.657 [2024-07-15 19:40:11.330484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.330948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.330991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.331013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.331545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.331710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.331722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.331728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.334320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.658 [2024-07-15 19:40:11.343346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.343723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.343740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.343747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.343909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.344073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.344082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.344088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.346684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.658 [2024-07-15 19:40:11.356207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.356657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.356674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.356681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.356844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.357007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.357017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.357023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.359727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.658 [2024-07-15 19:40:11.369120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.369589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.369626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.369650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.370244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.370827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.370852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.370872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.374942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.658 [2024-07-15 19:40:11.382619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.383074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.383091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.383098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.383272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.383439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.383449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.383455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.386116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.658 [2024-07-15 19:40:11.395429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.395855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.395872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.395879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.396041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.396205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.396214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.396220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.398817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.658 [2024-07-15 19:40:11.408301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.408762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.408804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.408826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.409420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.410013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.410023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.410029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.412620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.658 [2024-07-15 19:40:11.421195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.421644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.421687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.421709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.422130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.422300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.422310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.422316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.424908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.658 [2024-07-15 19:40:11.434092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.434535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.434578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.434602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.435082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.435253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.435261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.435267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.437857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.658 [2024-07-15 19:40:11.446883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.447328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.447345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.447352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.447516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.447679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.447688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.447694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.450292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.658 [2024-07-15 19:40:11.459767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.658 [2024-07-15 19:40:11.460292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.658 [2024-07-15 19:40:11.460335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.658 [2024-07-15 19:40:11.460358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.658 [2024-07-15 19:40:11.460936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.658 [2024-07-15 19:40:11.461169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.658 [2024-07-15 19:40:11.461178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.658 [2024-07-15 19:40:11.461204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.658 [2024-07-15 19:40:11.464050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.658 [2024-07-15 19:40:11.472767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.659 [2024-07-15 19:40:11.473218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.659 [2024-07-15 19:40:11.473275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.659 [2024-07-15 19:40:11.473298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.659 [2024-07-15 19:40:11.473877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.659 [2024-07-15 19:40:11.474309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.659 [2024-07-15 19:40:11.474318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.659 [2024-07-15 19:40:11.474324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.659 [2024-07-15 19:40:11.476914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.659 [2024-07-15 19:40:11.485635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.659 [2024-07-15 19:40:11.486074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.659 [2024-07-15 19:40:11.486116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.659 [2024-07-15 19:40:11.486139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.659 [2024-07-15 19:40:11.486733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.659 [2024-07-15 19:40:11.487243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.659 [2024-07-15 19:40:11.487252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.659 [2024-07-15 19:40:11.487259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.659 [2024-07-15 19:40:11.489975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.659 [2024-07-15 19:40:11.498636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.659 [2024-07-15 19:40:11.499045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.659 [2024-07-15 19:40:11.499063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.659 [2024-07-15 19:40:11.499070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.659 [2024-07-15 19:40:11.499240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.659 [2024-07-15 19:40:11.499404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.659 [2024-07-15 19:40:11.499413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.659 [2024-07-15 19:40:11.499419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.659 [2024-07-15 19:40:11.502011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.920 [2024-07-15 19:40:11.511598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.920 [2024-07-15 19:40:11.512076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.920 [2024-07-15 19:40:11.512118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.920 [2024-07-15 19:40:11.512142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.920 [2024-07-15 19:40:11.512736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.920 [2024-07-15 19:40:11.513292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.920 [2024-07-15 19:40:11.513301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.920 [2024-07-15 19:40:11.513308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.920 [2024-07-15 19:40:11.516046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.920 [2024-07-15 19:40:11.524434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.920 [2024-07-15 19:40:11.524819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.920 [2024-07-15 19:40:11.524835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.920 [2024-07-15 19:40:11.524842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.920 [2024-07-15 19:40:11.525006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.920 [2024-07-15 19:40:11.525169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.920 [2024-07-15 19:40:11.525178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.920 [2024-07-15 19:40:11.525184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.920 [2024-07-15 19:40:11.527787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.920 [2024-07-15 19:40:11.537273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.920 [2024-07-15 19:40:11.537658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.920 [2024-07-15 19:40:11.537675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.920 [2024-07-15 19:40:11.537683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.920 [2024-07-15 19:40:11.537846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.920 [2024-07-15 19:40:11.538010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.920 [2024-07-15 19:40:11.538019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.920 [2024-07-15 19:40:11.538025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.920 [2024-07-15 19:40:11.540617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.920 [2024-07-15 19:40:11.550145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.920 [2024-07-15 19:40:11.550610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.920 [2024-07-15 19:40:11.550654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.920 [2024-07-15 19:40:11.550677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.920 [2024-07-15 19:40:11.551234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.920 [2024-07-15 19:40:11.551399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.920 [2024-07-15 19:40:11.551408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.920 [2024-07-15 19:40:11.551414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.920 [2024-07-15 19:40:11.554006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.920 [2024-07-15 19:40:11.563034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.920 [2024-07-15 19:40:11.563475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.920 [2024-07-15 19:40:11.563518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.920 [2024-07-15 19:40:11.563541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.920 [2024-07-15 19:40:11.564007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.920 [2024-07-15 19:40:11.564171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.920 [2024-07-15 19:40:11.564180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.920 [2024-07-15 19:40:11.564186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.920 [2024-07-15 19:40:11.566783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.920 [2024-07-15 19:40:11.575960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.920 [2024-07-15 19:40:11.576447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.920 [2024-07-15 19:40:11.576490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.920 [2024-07-15 19:40:11.576512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.920 [2024-07-15 19:40:11.577090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.920 [2024-07-15 19:40:11.577575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.920 [2024-07-15 19:40:11.577584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.920 [2024-07-15 19:40:11.577590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.920 [2024-07-15 19:40:11.580179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.920 [2024-07-15 19:40:11.588744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.920 [2024-07-15 19:40:11.589196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.589250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.589272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.589850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.590395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.590404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.590410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.593003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.921 [2024-07-15 19:40:11.601567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.602017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.602033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.602040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.602204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.602373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.602383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.602389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.604979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.921 [2024-07-15 19:40:11.614459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.614887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.614934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.614957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.615510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.615674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.615682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.615688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.618284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.921 [2024-07-15 19:40:11.627331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.627724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.627742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.627750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.627922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.628094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.628103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.628110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.630856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.921 [2024-07-15 19:40:11.640504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.640976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.640996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.641004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.641181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.641365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.641376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.641383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.644207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.921 [2024-07-15 19:40:11.653562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.653952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.653969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.653977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.654153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.654337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.654347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.654354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.657179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.921 [2024-07-15 19:40:11.666700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.667121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.667138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.667146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.667329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.667507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.667516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.667523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.670349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.921 [2024-07-15 19:40:11.679859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.680322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.680345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.680352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.680529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.680710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.680718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.680725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.683550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.921 [2024-07-15 19:40:11.692842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.693305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.693321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.693328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.693507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.693670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.693680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.693686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.696417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.921 [2024-07-15 19:40:11.705885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.706352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.706382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.706389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.706550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.706713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.706721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.706727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.709457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.921 [2024-07-15 19:40:11.718849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.719202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.921 [2024-07-15 19:40:11.719219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.921 [2024-07-15 19:40:11.719232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.921 [2024-07-15 19:40:11.719405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.921 [2024-07-15 19:40:11.719578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.921 [2024-07-15 19:40:11.719587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.921 [2024-07-15 19:40:11.719594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.921 [2024-07-15 19:40:11.722539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.921 [2024-07-15 19:40:11.731838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.921 [2024-07-15 19:40:11.732307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.922 [2024-07-15 19:40:11.732325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.922 [2024-07-15 19:40:11.732333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.922 [2024-07-15 19:40:11.732510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.922 [2024-07-15 19:40:11.732674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.922 [2024-07-15 19:40:11.732683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.922 [2024-07-15 19:40:11.732689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.922 [2024-07-15 19:40:11.735417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.922 [2024-07-15 19:40:11.744805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.922 [2024-07-15 19:40:11.745204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.922 [2024-07-15 19:40:11.745221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.922 [2024-07-15 19:40:11.745234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.922 [2024-07-15 19:40:11.745407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.922 [2024-07-15 19:40:11.745578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.922 [2024-07-15 19:40:11.745588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.922 [2024-07-15 19:40:11.745594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.922 [2024-07-15 19:40:11.748339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.922 [2024-07-15 19:40:11.757887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.922 [2024-07-15 19:40:11.758347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.922 [2024-07-15 19:40:11.758364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.922 [2024-07-15 19:40:11.758385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.922 [2024-07-15 19:40:11.758547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.922 [2024-07-15 19:40:11.758710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.922 [2024-07-15 19:40:11.758719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.922 [2024-07-15 19:40:11.758725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.922 [2024-07-15 19:40:11.761457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.922 [2024-07-15 19:40:11.770980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.922 [2024-07-15 19:40:11.771448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.922 [2024-07-15 19:40:11.771466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:00.922 [2024-07-15 19:40:11.771477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:00.922 [2024-07-15 19:40:11.771655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:00.922 [2024-07-15 19:40:11.771834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.922 [2024-07-15 19:40:11.771844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.922 [2024-07-15 19:40:11.771851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.182 [2024-07-15 19:40:11.774724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.182 [2024-07-15 19:40:11.783922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.182 [2024-07-15 19:40:11.784383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.182 [2024-07-15 19:40:11.784405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.182 [2024-07-15 19:40:11.784413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.182 [2024-07-15 19:40:11.784584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.182 [2024-07-15 19:40:11.784756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.182 [2024-07-15 19:40:11.784765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.182 [2024-07-15 19:40:11.784771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.182 [2024-07-15 19:40:11.787515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.182 [2024-07-15 19:40:11.796894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.182 [2024-07-15 19:40:11.797356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.182 [2024-07-15 19:40:11.797373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.182 [2024-07-15 19:40:11.797380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.182 [2024-07-15 19:40:11.797552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.182 [2024-07-15 19:40:11.797725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.182 [2024-07-15 19:40:11.797734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.182 [2024-07-15 19:40:11.797740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.182 [2024-07-15 19:40:11.800486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.182 [2024-07-15 19:40:11.810082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.182 [2024-07-15 19:40:11.810486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.182 [2024-07-15 19:40:11.810504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.182 [2024-07-15 19:40:11.810513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.182 [2024-07-15 19:40:11.810685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.182 [2024-07-15 19:40:11.810858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.182 [2024-07-15 19:40:11.810871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.182 [2024-07-15 19:40:11.810877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.182 [2024-07-15 19:40:11.813623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.182 [2024-07-15 19:40:11.823177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.182 [2024-07-15 19:40:11.823580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.182 [2024-07-15 19:40:11.823597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.182 [2024-07-15 19:40:11.823604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.823778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.823951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.823960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.823967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.826715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.183 [2024-07-15 19:40:11.836270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.836713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.836731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.836738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.836911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.837085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.837094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.837101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.839847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.183 [2024-07-15 19:40:11.849232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.849671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.849689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.849696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.849868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.850042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.850052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.850059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.852806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.183 [2024-07-15 19:40:11.862194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.862666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.862683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.862691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.862863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.863035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.863045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.863051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.865794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.183 [2024-07-15 19:40:11.875251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.875716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.875733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.875740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.875912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.876084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.876093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.876100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.878847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.183 [2024-07-15 19:40:11.888233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.888712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.888729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.888737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.888914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.889092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.889101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.889108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.891876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.183 [2024-07-15 19:40:11.901270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.901708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.901725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.901732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.901907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.902081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.902090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.902096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.904841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.183 [2024-07-15 19:40:11.914218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.914680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.914697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.914704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.914876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.915049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.915057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.915063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.917819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.183 [2024-07-15 19:40:11.927207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.927602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.927619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.927626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.927802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.927976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.927985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.927992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.930737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.183 [2024-07-15 19:40:11.940287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.940751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.940767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.940774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.940946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.941119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.941128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.941138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.943883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.183 [2024-07-15 19:40:11.953272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.953712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.953729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.953736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.183 [2024-07-15 19:40:11.953908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.183 [2024-07-15 19:40:11.954081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.183 [2024-07-15 19:40:11.954090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.183 [2024-07-15 19:40:11.954096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.183 [2024-07-15 19:40:11.956843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.183 [2024-07-15 19:40:11.966222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.183 [2024-07-15 19:40:11.966681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.183 [2024-07-15 19:40:11.966697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.183 [2024-07-15 19:40:11.966704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.184 [2024-07-15 19:40:11.966877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.184 [2024-07-15 19:40:11.967050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.184 [2024-07-15 19:40:11.967059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.184 [2024-07-15 19:40:11.967065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.184 [2024-07-15 19:40:11.969812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.184 [2024-07-15 19:40:11.979315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.184 [2024-07-15 19:40:11.979783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.184 [2024-07-15 19:40:11.979799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.184 [2024-07-15 19:40:11.979807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.184 [2024-07-15 19:40:11.979991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.184 [2024-07-15 19:40:11.980164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.184 [2024-07-15 19:40:11.980174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.184 [2024-07-15 19:40:11.980180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.184 [2024-07-15 19:40:11.982926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.184 [2024-07-15 19:40:11.992317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.184 [2024-07-15 19:40:11.992790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.184 [2024-07-15 19:40:11.992806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.184 [2024-07-15 19:40:11.992814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.184 [2024-07-15 19:40:11.992985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.184 [2024-07-15 19:40:11.993156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.184 [2024-07-15 19:40:11.993164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.184 [2024-07-15 19:40:11.993171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.184 [2024-07-15 19:40:11.995931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.184 [2024-07-15 19:40:12.005320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.184 [2024-07-15 19:40:12.005776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.184 [2024-07-15 19:40:12.005793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.184 [2024-07-15 19:40:12.005800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.184 [2024-07-15 19:40:12.005972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.184 [2024-07-15 19:40:12.006144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.184 [2024-07-15 19:40:12.006154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.184 [2024-07-15 19:40:12.006160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.184 [2024-07-15 19:40:12.008904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.184 [2024-07-15 19:40:12.018295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.184 [2024-07-15 19:40:12.018731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.184 [2024-07-15 19:40:12.018748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.184 [2024-07-15 19:40:12.018755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.184 [2024-07-15 19:40:12.018927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.184 [2024-07-15 19:40:12.019099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.184 [2024-07-15 19:40:12.019109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.184 [2024-07-15 19:40:12.019115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.184 [2024-07-15 19:40:12.021861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.184 [2024-07-15 19:40:12.031297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.184 [2024-07-15 19:40:12.031742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.184 [2024-07-15 19:40:12.031760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.184 [2024-07-15 19:40:12.031767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.184 [2024-07-15 19:40:12.031970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.184 [2024-07-15 19:40:12.032154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.184 [2024-07-15 19:40:12.032164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.184 [2024-07-15 19:40:12.032172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.184 [2024-07-15 19:40:12.035047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.445 [2024-07-15 19:40:12.044293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.445 [2024-07-15 19:40:12.044756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.445 [2024-07-15 19:40:12.044774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.445 [2024-07-15 19:40:12.044781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.445 [2024-07-15 19:40:12.044953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.445 [2024-07-15 19:40:12.045127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.445 [2024-07-15 19:40:12.045136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.445 [2024-07-15 19:40:12.045142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.445 [2024-07-15 19:40:12.047889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.445 [2024-07-15 19:40:12.057286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.445 [2024-07-15 19:40:12.057670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.445 [2024-07-15 19:40:12.057686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.445 [2024-07-15 19:40:12.057694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.445 [2024-07-15 19:40:12.057866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.445 [2024-07-15 19:40:12.058038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.445 [2024-07-15 19:40:12.058048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.445 [2024-07-15 19:40:12.058055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.445 [2024-07-15 19:40:12.060801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.445 [2024-07-15 19:40:12.070367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.445 [2024-07-15 19:40:12.070824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.445 [2024-07-15 19:40:12.070841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.445 [2024-07-15 19:40:12.070849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.445 [2024-07-15 19:40:12.071021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.445 [2024-07-15 19:40:12.071193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.445 [2024-07-15 19:40:12.071202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.445 [2024-07-15 19:40:12.071208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.445 [2024-07-15 19:40:12.073956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.445 [2024-07-15 19:40:12.083358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.445 [2024-07-15 19:40:12.083686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.445 [2024-07-15 19:40:12.083703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.445 [2024-07-15 19:40:12.083710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.445 [2024-07-15 19:40:12.083882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.445 [2024-07-15 19:40:12.084056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.445 [2024-07-15 19:40:12.084066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.445 [2024-07-15 19:40:12.084072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.445 [2024-07-15 19:40:12.086817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.445 [2024-07-15 19:40:12.096425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.445 [2024-07-15 19:40:12.096895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.445 [2024-07-15 19:40:12.096913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.445 [2024-07-15 19:40:12.096921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.445 [2024-07-15 19:40:12.097098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.445 [2024-07-15 19:40:12.097283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.445 [2024-07-15 19:40:12.097294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.445 [2024-07-15 19:40:12.097301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.445 [2024-07-15 19:40:12.100127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.445 [2024-07-15 19:40:12.109534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.445 [2024-07-15 19:40:12.109989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.445 [2024-07-15 19:40:12.110006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.445 [2024-07-15 19:40:12.110013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.445 [2024-07-15 19:40:12.110184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.445 [2024-07-15 19:40:12.110364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.445 [2024-07-15 19:40:12.110373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.445 [2024-07-15 19:40:12.110380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.445 [2024-07-15 19:40:12.113121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.445 [2024-07-15 19:40:12.122528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.445 [2024-07-15 19:40:12.122953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.445 [2024-07-15 19:40:12.122970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.445 [2024-07-15 19:40:12.122981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.445 [2024-07-15 19:40:12.123153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.445 [2024-07-15 19:40:12.123331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.445 [2024-07-15 19:40:12.123341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.445 [2024-07-15 19:40:12.123348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.445 [2024-07-15 19:40:12.126090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.445 [2024-07-15 19:40:12.135500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.445 [2024-07-15 19:40:12.135870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.445 [2024-07-15 19:40:12.135887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.445 [2024-07-15 19:40:12.135895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.445 [2024-07-15 19:40:12.136067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.445 [2024-07-15 19:40:12.136247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.445 [2024-07-15 19:40:12.136257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.445 [2024-07-15 19:40:12.136263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.445 [2024-07-15 19:40:12.139002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.445 [2024-07-15 19:40:12.148565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.445 [2024-07-15 19:40:12.148952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.445 [2024-07-15 19:40:12.148969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.445 [2024-07-15 19:40:12.148977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.445 [2024-07-15 19:40:12.149149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.445 [2024-07-15 19:40:12.149329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.445 [2024-07-15 19:40:12.149339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.445 [2024-07-15 19:40:12.149345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.445 [2024-07-15 19:40:12.152091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.446 [2024-07-15 19:40:12.161661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.162028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.162045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.162053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.162231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.162412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.162423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.162429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.165173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.446 [2024-07-15 19:40:12.174740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.175063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.175081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.175088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.175266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.175438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.175447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.175454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.178198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.446 [2024-07-15 19:40:12.187756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.188148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.188164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.188171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.188350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.188524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.188533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.188539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.191347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.446 [2024-07-15 19:40:12.200792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.201237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.201254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.201262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.201448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.201622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.201631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.201637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.204382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.446 [2024-07-15 19:40:12.213786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.214256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.214274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.214281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.214453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.214627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.214637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.214644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.217383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.446 [2024-07-15 19:40:12.226788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.227195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.227212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.227220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.227399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.227572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.227582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.227588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.230410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.446 [2024-07-15 19:40:12.239870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.240328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.240346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.240354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.240526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.240698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.240707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.240714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.243457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.446 [2024-07-15 19:40:12.252860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.253254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.253271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.253282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.253463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.253628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.253637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.253643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.256375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.446 [2024-07-15 19:40:12.265933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.266415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.266432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.266440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.266603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.266766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.266776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.266782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.269519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.446 [2024-07-15 19:40:12.278912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.279378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.279396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.279403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.279575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.279748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.279758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.279764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.282507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.446 [2024-07-15 19:40:12.291908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.446 [2024-07-15 19:40:12.292281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.446 [2024-07-15 19:40:12.292298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.446 [2024-07-15 19:40:12.292305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.446 [2024-07-15 19:40:12.292485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.446 [2024-07-15 19:40:12.292668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.446 [2024-07-15 19:40:12.292680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.446 [2024-07-15 19:40:12.292687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.446 [2024-07-15 19:40:12.295513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.707 [2024-07-15 19:40:12.305002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.707 [2024-07-15 19:40:12.305497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.707 [2024-07-15 19:40:12.305514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.707 [2024-07-15 19:40:12.305522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.707 [2024-07-15 19:40:12.305695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.707 [2024-07-15 19:40:12.305870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.707 [2024-07-15 19:40:12.305880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.707 [2024-07-15 19:40:12.305886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.707 [2024-07-15 19:40:12.308630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.707 [2024-07-15 19:40:12.318021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.707 [2024-07-15 19:40:12.318469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.707 [2024-07-15 19:40:12.318486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.707 [2024-07-15 19:40:12.318493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.707 [2024-07-15 19:40:12.318655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.707 [2024-07-15 19:40:12.318820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.707 [2024-07-15 19:40:12.318829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.707 [2024-07-15 19:40:12.318835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.707 [2024-07-15 19:40:12.321574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.707 [2024-07-15 19:40:12.330980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.707 [2024-07-15 19:40:12.331362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.707 [2024-07-15 19:40:12.331379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.707 [2024-07-15 19:40:12.331387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.707 [2024-07-15 19:40:12.331559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.707 [2024-07-15 19:40:12.331733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.707 [2024-07-15 19:40:12.331742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.707 [2024-07-15 19:40:12.331749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.707 [2024-07-15 19:40:12.334491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.707 [2024-07-15 19:40:12.344053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.707 [2024-07-15 19:40:12.344432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.707 [2024-07-15 19:40:12.344448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.707 [2024-07-15 19:40:12.344455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.707 [2024-07-15 19:40:12.344627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.707 [2024-07-15 19:40:12.344800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.707 [2024-07-15 19:40:12.344809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.344816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.347562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.708 [2024-07-15 19:40:12.357119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.357447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.357464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.357472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.357644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.357817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.708 [2024-07-15 19:40:12.357827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.357833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.360582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.708 [2024-07-15 19:40:12.370147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.370564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.370582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.370590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.370762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.370936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.708 [2024-07-15 19:40:12.370946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.370952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.373700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.708 [2024-07-15 19:40:12.383096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.383541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.383558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.383565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.383741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.383914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.708 [2024-07-15 19:40:12.383924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.383930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.386675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.708 [2024-07-15 19:40:12.396077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.396449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.396467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.396474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.396646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.396819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.708 [2024-07-15 19:40:12.396829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.396836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.399584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.708 [2024-07-15 19:40:12.409139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.409476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.409494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.409501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.409673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.409847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.708 [2024-07-15 19:40:12.409856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.409862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.412609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.708 [2024-07-15 19:40:12.422179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.422509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.422526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.422533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.422705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.422879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.708 [2024-07-15 19:40:12.422888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.422898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.425647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.708 [2024-07-15 19:40:12.435212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.435653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.435670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.435678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.435850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.436023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.708 [2024-07-15 19:40:12.436032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.436039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.438785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.708 [2024-07-15 19:40:12.448182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.448572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.448590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.448597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.448770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.448944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.708 [2024-07-15 19:40:12.448953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.448959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.451706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.708 [2024-07-15 19:40:12.461267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.461642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.461659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.461667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.461839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.462013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.708 [2024-07-15 19:40:12.462023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.462030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.464772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.708 [2024-07-15 19:40:12.474336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.474657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.474676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.474684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.474857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.475030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.708 [2024-07-15 19:40:12.475040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.708 [2024-07-15 19:40:12.475046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.708 [2024-07-15 19:40:12.477792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.708 [2024-07-15 19:40:12.487461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.708 [2024-07-15 19:40:12.487930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.708 [2024-07-15 19:40:12.487947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.708 [2024-07-15 19:40:12.487955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.708 [2024-07-15 19:40:12.488126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.708 [2024-07-15 19:40:12.488305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.709 [2024-07-15 19:40:12.488315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.709 [2024-07-15 19:40:12.488322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.709 [2024-07-15 19:40:12.491059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.709 [2024-07-15 19:40:12.500453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.709 [2024-07-15 19:40:12.500826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.709 [2024-07-15 19:40:12.500842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.709 [2024-07-15 19:40:12.500849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.709 [2024-07-15 19:40:12.501022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.709 [2024-07-15 19:40:12.501194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.709 [2024-07-15 19:40:12.501203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.709 [2024-07-15 19:40:12.501210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.709 [2024-07-15 19:40:12.503954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.709 [2024-07-15 19:40:12.513503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.709 [2024-07-15 19:40:12.513968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.709 [2024-07-15 19:40:12.513984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.709 [2024-07-15 19:40:12.513992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.709 [2024-07-15 19:40:12.514164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.709 [2024-07-15 19:40:12.514346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.709 [2024-07-15 19:40:12.514357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.709 [2024-07-15 19:40:12.514363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.709 [2024-07-15 19:40:12.517100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.709 [2024-07-15 19:40:12.526497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.709 [2024-07-15 19:40:12.526959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.709 [2024-07-15 19:40:12.526976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.709 [2024-07-15 19:40:12.526983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.709 [2024-07-15 19:40:12.527155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.709 [2024-07-15 19:40:12.527334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.709 [2024-07-15 19:40:12.527344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.709 [2024-07-15 19:40:12.527350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.709 [2024-07-15 19:40:12.530090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.709 [2024-07-15 19:40:12.539473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.709 [2024-07-15 19:40:12.539927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.709 [2024-07-15 19:40:12.539944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.709 [2024-07-15 19:40:12.539952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.709 [2024-07-15 19:40:12.540125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.709 [2024-07-15 19:40:12.540304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.709 [2024-07-15 19:40:12.540314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.709 [2024-07-15 19:40:12.540321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.709 [2024-07-15 19:40:12.543060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.709 [2024-07-15 19:40:12.552444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.709 [2024-07-15 19:40:12.552845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.709 [2024-07-15 19:40:12.552862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.709 [2024-07-15 19:40:12.552869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.709 [2024-07-15 19:40:12.553041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.709 [2024-07-15 19:40:12.553213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.709 [2024-07-15 19:40:12.553223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.709 [2024-07-15 19:40:12.553237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.709 [2024-07-15 19:40:12.556017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.970 [2024-07-15 19:40:12.565476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.970 [2024-07-15 19:40:12.565947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.970 [2024-07-15 19:40:12.565963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.970 [2024-07-15 19:40:12.565971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.970 [2024-07-15 19:40:12.566148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.970 [2024-07-15 19:40:12.566334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.970 [2024-07-15 19:40:12.566345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.970 [2024-07-15 19:40:12.566351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.970 [2024-07-15 19:40:12.569107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.970 [2024-07-15 19:40:12.578496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.970 [2024-07-15 19:40:12.578957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.970 [2024-07-15 19:40:12.578974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.971 [2024-07-15 19:40:12.578981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.971 [2024-07-15 19:40:12.579153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.971 [2024-07-15 19:40:12.579332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.971 [2024-07-15 19:40:12.579342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.971 [2024-07-15 19:40:12.579349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.971 [2024-07-15 19:40:12.582089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.971 [2024-07-15 19:40:12.591474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.971 [2024-07-15 19:40:12.591869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.971 [2024-07-15 19:40:12.591887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.971 [2024-07-15 19:40:12.591894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.971 [2024-07-15 19:40:12.592066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.971 [2024-07-15 19:40:12.592244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.971 [2024-07-15 19:40:12.592254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.971 [2024-07-15 19:40:12.592261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.971 [2024-07-15 19:40:12.594999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.971 [2024-07-15 19:40:12.604542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.971 [2024-07-15 19:40:12.605003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.971 [2024-07-15 19:40:12.605020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.971 [2024-07-15 19:40:12.605031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.971 [2024-07-15 19:40:12.605203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.971 [2024-07-15 19:40:12.605384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.971 [2024-07-15 19:40:12.605394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.971 [2024-07-15 19:40:12.605401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.971 [2024-07-15 19:40:12.608140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.971 [2024-07-15 19:40:12.617533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.971 [2024-07-15 19:40:12.617971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.971 [2024-07-15 19:40:12.617988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.971 [2024-07-15 19:40:12.617995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.971 [2024-07-15 19:40:12.618167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.971 [2024-07-15 19:40:12.618346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.971 [2024-07-15 19:40:12.618356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.971 [2024-07-15 19:40:12.618362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.971 [2024-07-15 19:40:12.621100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.971 [2024-07-15 19:40:12.630488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.971 [2024-07-15 19:40:12.630950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.971 [2024-07-15 19:40:12.630968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.971 [2024-07-15 19:40:12.630975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.971 [2024-07-15 19:40:12.631147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.971 [2024-07-15 19:40:12.631325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.971 [2024-07-15 19:40:12.631335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.971 [2024-07-15 19:40:12.631341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.971 [2024-07-15 19:40:12.634079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.971 [2024-07-15 19:40:12.643464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.971 [2024-07-15 19:40:12.643907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.971 [2024-07-15 19:40:12.643924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.971 [2024-07-15 19:40:12.643931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.971 [2024-07-15 19:40:12.644102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.971 [2024-07-15 19:40:12.644282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.971 [2024-07-15 19:40:12.644294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.971 [2024-07-15 19:40:12.644300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.971 [2024-07-15 19:40:12.647043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.971 [2024-07-15 19:40:12.656425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.971 [2024-07-15 19:40:12.656887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.971 [2024-07-15 19:40:12.656904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.971 [2024-07-15 19:40:12.656911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.971 [2024-07-15 19:40:12.657083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.971 [2024-07-15 19:40:12.657261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.971 [2024-07-15 19:40:12.657271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.971 [2024-07-15 19:40:12.657277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.971 [2024-07-15 19:40:12.660013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
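Every failed attempt in this stretch reports connect() errno = 111, which on Linux is ECONNREFUSED: nothing is accepting connections on 10.0.0.2 port 4420 at this point, because the nvmf target process has been killed (the shell notices this just below), so each reconnect is refused until the target comes back up. To double-check the errno mapping on the build host, a throwaway one-liner is enough (it assumes a stock python3 on the PATH):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'  # ECONNREFUSED - Connection refused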
00:34:01.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1836868 Killed "${NVMF_APP[@]}" "$@" 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.971 [2024-07-15 19:40:12.669448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.971 [2024-07-15 19:40:12.669914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.971 [2024-07-15 19:40:12.669931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.971 [2024-07-15 19:40:12.669939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.971 [2024-07-15 19:40:12.670116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.971 [2024-07-15 19:40:12.670301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.971 [2024-07-15 19:40:12.670311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.971 [2024-07-15 19:40:12.670318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.971 [2024-07-15 19:40:12.673144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1838255 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1838255 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1838255 ']' 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:01.971 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:01.971 [2024-07-15 19:40:12.682462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.971 [2024-07-15 19:40:12.682906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.971 [2024-07-15 19:40:12.682924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.971 [2024-07-15 19:40:12.682932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.971 [2024-07-15 19:40:12.683103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.971 [2024-07-15 19:40:12.683283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.971 [2024-07-15 19:40:12.683293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.971 [2024-07-15 19:40:12.683302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.971 [2024-07-15 19:40:12.686044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.971 [2024-07-15 19:40:12.695437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.971 [2024-07-15 19:40:12.695871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.971 [2024-07-15 19:40:12.695887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.971 [2024-07-15 19:40:12.695895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.971 [2024-07-15 19:40:12.696066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.971 [2024-07-15 19:40:12.696245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.971 [2024-07-15 19:40:12.696255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.972 [2024-07-15 19:40:12.696261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.972 [2024-07-15 19:40:12.698984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.972 [2024-07-15 19:40:12.708544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.972 [2024-07-15 19:40:12.708942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.972 [2024-07-15 19:40:12.708959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.972 [2024-07-15 19:40:12.708966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.972 [2024-07-15 19:40:12.709139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.972 [2024-07-15 19:40:12.709318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.972 [2024-07-15 19:40:12.709328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.972 [2024-07-15 19:40:12.709336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.972 [2024-07-15 19:40:12.712076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.972 [2024-07-15 19:40:12.720065] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:34:01.972 [2024-07-15 19:40:12.720112] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.972 [2024-07-15 19:40:12.721636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.972 [2024-07-15 19:40:12.722106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.972 [2024-07-15 19:40:12.722124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.972 [2024-07-15 19:40:12.722134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.972 [2024-07-15 19:40:12.722311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.972 [2024-07-15 19:40:12.722486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.972 [2024-07-15 19:40:12.722497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.972 [2024-07-15 19:40:12.722504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.972 [2024-07-15 19:40:12.725254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
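The EAL parameters entry above shows the restarted target was handed core mask 0xE (matching the -m 0xE passed to nvmfappstart earlier). 0xE is binary 1110, i.e. cores 1, 2 and 3, which is why "Total cores available: 3" and reactors on cores 1, 2 and 3 show up a few entries further down. A quick, illustrative decode of the mask (again assuming python3 is available; the bound of 8 cores is arbitrary for the example):

  python3 -c 'mask = 0xE; print([core for core in range(8) if mask >> core & 1])'  # [1, 2, 3]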
00:34:01.972 [2024-07-15 19:40:12.734654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.972 [2024-07-15 19:40:12.735112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.972 [2024-07-15 19:40:12.735129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.972 [2024-07-15 19:40:12.735137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.972 [2024-07-15 19:40:12.735314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.972 [2024-07-15 19:40:12.735487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.972 [2024-07-15 19:40:12.735497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.972 [2024-07-15 19:40:12.735504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.972 [2024-07-15 19:40:12.738329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.972 [2024-07-15 19:40:12.747736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.972 [2024-07-15 19:40:12.748158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.972 [2024-07-15 19:40:12.748176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.972 [2024-07-15 19:40:12.748184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.972 [2024-07-15 19:40:12.748362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.972 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.972 [2024-07-15 19:40:12.748534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.972 [2024-07-15 19:40:12.748546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.972 [2024-07-15 19:40:12.748554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.972 [2024-07-15 19:40:12.751301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.972 [2024-07-15 19:40:12.754139] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:34:01.972 [2024-07-15 19:40:12.760727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.972 [2024-07-15 19:40:12.761200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.972 [2024-07-15 19:40:12.761217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.972 [2024-07-15 19:40:12.761229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.972 [2024-07-15 19:40:12.761406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.972 [2024-07-15 19:40:12.761594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.972 [2024-07-15 19:40:12.761604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.972 [2024-07-15 19:40:12.761611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.972 [2024-07-15 19:40:12.764355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.972 [2024-07-15 19:40:12.773800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.972 [2024-07-15 19:40:12.774257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.972 [2024-07-15 19:40:12.774276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.972 [2024-07-15 19:40:12.774283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.972 [2024-07-15 19:40:12.774456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.972 [2024-07-15 19:40:12.774629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.972 [2024-07-15 19:40:12.774638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.972 [2024-07-15 19:40:12.774645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.972 [2024-07-15 19:40:12.777389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.972 [2024-07-15 19:40:12.778470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:01.972 [2024-07-15 19:40:12.786787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.972 [2024-07-15 19:40:12.787263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.972 [2024-07-15 19:40:12.787283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.972 [2024-07-15 19:40:12.787290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.972 [2024-07-15 19:40:12.787464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.972 [2024-07-15 19:40:12.787638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.972 [2024-07-15 19:40:12.787648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.972 [2024-07-15 19:40:12.787655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.972 [2024-07-15 19:40:12.790400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.972 [2024-07-15 19:40:12.799794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.972 [2024-07-15 19:40:12.800274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.972 [2024-07-15 19:40:12.800298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.972 [2024-07-15 19:40:12.800307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.972 [2024-07-15 19:40:12.800480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.972 [2024-07-15 19:40:12.800654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.972 [2024-07-15 19:40:12.800664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.972 [2024-07-15 19:40:12.800672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.972 [2024-07-15 19:40:12.803620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.972 [2024-07-15 19:40:12.812864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.972 [2024-07-15 19:40:12.813345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.972 [2024-07-15 19:40:12.813366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:01.972 [2024-07-15 19:40:12.813375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:01.972 [2024-07-15 19:40:12.813550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:01.972 [2024-07-15 19:40:12.813725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.972 [2024-07-15 19:40:12.813735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.972 [2024-07-15 19:40:12.813742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.972 [2024-07-15 19:40:12.816491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.972 [2024-07-15 19:40:12.820392] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.972 [2024-07-15 19:40:12.820423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.972 [2024-07-15 19:40:12.820430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.972 [2024-07-15 19:40:12.820436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.972 [2024-07-15 19:40:12.820441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:01.972 [2024-07-15 19:40:12.822245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:01.972 [2024-07-15 19:40:12.822269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:01.972 [2024-07-15 19:40:12.822271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.233 [2024-07-15 19:40:12.826017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.233 [2024-07-15 19:40:12.826494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.233 [2024-07-15 19:40:12.826514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.233 [2024-07-15 19:40:12.826523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.233 [2024-07-15 19:40:12.826703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.233 [2024-07-15 19:40:12.826883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.234 [2024-07-15 19:40:12.826895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.234 [2024-07-15 19:40:12.826902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:02.234 [2024-07-15 19:40:12.829748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.234 [2024-07-15 19:40:12.839108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.234 [2024-07-15 19:40:12.839560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.234 [2024-07-15 19:40:12.839581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.234 [2024-07-15 19:40:12.839589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.234 [2024-07-15 19:40:12.839769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.234 [2024-07-15 19:40:12.839948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.234 [2024-07-15 19:40:12.839958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.234 [2024-07-15 19:40:12.839965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.234 [2024-07-15 19:40:12.842793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.234 [2024-07-15 19:40:12.852312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.234 [2024-07-15 19:40:12.852802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.234 [2024-07-15 19:40:12.852822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.234 [2024-07-15 19:40:12.852831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.234 [2024-07-15 19:40:12.853008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.234 [2024-07-15 19:40:12.853186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.234 [2024-07-15 19:40:12.853195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.234 [2024-07-15 19:40:12.853203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.234 [2024-07-15 19:40:12.856033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.234 [2024-07-15 19:40:12.865378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.234 [2024-07-15 19:40:12.865839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.234 [2024-07-15 19:40:12.865859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.234 [2024-07-15 19:40:12.865867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.234 [2024-07-15 19:40:12.866047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.234 [2024-07-15 19:40:12.866232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.234 [2024-07-15 19:40:12.866242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.234 [2024-07-15 19:40:12.866250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.234 [2024-07-15 19:40:12.869075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.234 [2024-07-15 19:40:12.878422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.234 [2024-07-15 19:40:12.878907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.234 [2024-07-15 19:40:12.878934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.234 [2024-07-15 19:40:12.878942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.234 [2024-07-15 19:40:12.879119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.234 [2024-07-15 19:40:12.879302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.234 [2024-07-15 19:40:12.879311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.234 [2024-07-15 19:40:12.879319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.234 [2024-07-15 19:40:12.882138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.234 [2024-07-15 19:40:12.891483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.234 [2024-07-15 19:40:12.891957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.234 [2024-07-15 19:40:12.891974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.234 [2024-07-15 19:40:12.891982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.234 [2024-07-15 19:40:12.892159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.234 [2024-07-15 19:40:12.892341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.234 [2024-07-15 19:40:12.892350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.234 [2024-07-15 19:40:12.892357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.234 [2024-07-15 19:40:12.895179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.234 [2024-07-15 19:40:12.904522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.234 [2024-07-15 19:40:12.904966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.234 [2024-07-15 19:40:12.904983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.234 [2024-07-15 19:40:12.904991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.234 [2024-07-15 19:40:12.905167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.234 [2024-07-15 19:40:12.905351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.234 [2024-07-15 19:40:12.905361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.234 [2024-07-15 19:40:12.905368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.234 [2024-07-15 19:40:12.908190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.234 [2024-07-15 19:40:12.917702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.234 [2024-07-15 19:40:12.918026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.234 [2024-07-15 19:40:12.918043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.234 [2024-07-15 19:40:12.918050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.234 [2024-07-15 19:40:12.918234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.234 [2024-07-15 19:40:12.918420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.234 [2024-07-15 19:40:12.918430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.234 [2024-07-15 19:40:12.918436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.234 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:02.234 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:02.234 19:40:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:02.234 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:02.234 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.234 [2024-07-15 19:40:12.921262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.234 [2024-07-15 19:40:12.930782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.234 [2024-07-15 19:40:12.931236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.234 [2024-07-15 19:40:12.931254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.234 [2024-07-15 19:40:12.931262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.234 [2024-07-15 19:40:12.931440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.234 [2024-07-15 19:40:12.931617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.234 [2024-07-15 19:40:12.931627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.234 [2024-07-15 19:40:12.931634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.234 [2024-07-15 19:40:12.934461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.234 [2024-07-15 19:40:12.943977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.234 [2024-07-15 19:40:12.944385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.234 [2024-07-15 19:40:12.944403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.234 [2024-07-15 19:40:12.944412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.234 [2024-07-15 19:40:12.944591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.234 [2024-07-15 19:40:12.944770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.234 [2024-07-15 19:40:12.944779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.234 [2024-07-15 19:40:12.944786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.234 [2024-07-15 19:40:12.947616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.234 19:40:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.234 19:40:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:02.234 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.234 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.234 [2024-07-15 19:40:12.957133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.234 [2024-07-15 19:40:12.957518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.234 [2024-07-15 19:40:12.957535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.234 [2024-07-15 19:40:12.957546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.234 [2024-07-15 19:40:12.957723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.234 [2024-07-15 19:40:12.957903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.235 [2024-07-15 19:40:12.957912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.235 [2024-07-15 19:40:12.957919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.235 [2024-07-15 19:40:12.960060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.235 [2024-07-15 19:40:12.960745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.235 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.235 19:40:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:02.235 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.235 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.235 [2024-07-15 19:40:12.970259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.235 [2024-07-15 19:40:12.970631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.235 [2024-07-15 19:40:12.970648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.235 [2024-07-15 19:40:12.970656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.235 [2024-07-15 19:40:12.970833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.235 [2024-07-15 19:40:12.971012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.235 [2024-07-15 19:40:12.971022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.235 [2024-07-15 19:40:12.971028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.235 [2024-07-15 19:40:12.973856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.235 [2024-07-15 19:40:12.983372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.235 [2024-07-15 19:40:12.983816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.235 [2024-07-15 19:40:12.983834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.235 [2024-07-15 19:40:12.983842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.235 [2024-07-15 19:40:12.984019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.235 [2024-07-15 19:40:12.984197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.235 [2024-07-15 19:40:12.984206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.235 [2024-07-15 19:40:12.984213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.235 [2024-07-15 19:40:12.987040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.235 Malloc0 00:34:02.235 [2024-07-15 19:40:12.996569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.235 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.235 [2024-07-15 19:40:12.997016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.235 19:40:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:02.235 [2024-07-15 19:40:12.997038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.235 [2024-07-15 19:40:12.997047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.235 [2024-07-15 19:40:12.997229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.235 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.235 [2024-07-15 19:40:12.997406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.235 [2024-07-15 19:40:12.997417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.235 [2024-07-15 19:40:12.997423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.235 19:40:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.235 [2024-07-15 19:40:13.000248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.235 19:40:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.235 19:40:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:02.235 19:40:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.235 19:40:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.235 [2024-07-15 19:40:13.009760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.235 [2024-07-15 19:40:13.010231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.235 [2024-07-15 19:40:13.010248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24800d0 with addr=10.0.0.2, port=4420 00:34:02.235 [2024-07-15 19:40:13.010255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24800d0 is same with the state(5) to be set 00:34:02.235 [2024-07-15 19:40:13.010433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24800d0 (9): Bad file descriptor 00:34:02.235 [2024-07-15 19:40:13.010612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.235 [2024-07-15 19:40:13.010621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.235 [2024-07-15 19:40:13.010628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.235 [2024-07-15 19:40:13.013452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
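Interleaved with the reconnect errors, the rpc_cmd calls traced in the last few entries (together with the nvmf_subsystem_add_listener call just below) rebuild the target end to end: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a listener on 10.0.0.2:4420. A minimal standalone sketch of the same sequence with SPDK's rpc.py, flags copied from the trace (it assumes an SPDK checkout as the working directory and a target on the default /var/tmp/spdk.sock, details that rpc_cmd normally takes care of in this test):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

In the test these go through rpc_cmd against the freshly restarted nvmf_tgt, which is what the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice a little further down confirms.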
00:34:02.235 19:40:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.235 19:40:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.235 19:40:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.235 19:40:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.235 [2024-07-15 19:40:13.019963] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.235 [2024-07-15 19:40:13.022804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.235 19:40:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.235 19:40:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1837326 00:34:02.235 [2024-07-15 19:40:13.062982] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:12.246 00:34:12.246 Latency(us) 00:34:12.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.246 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:12.246 Verification LBA range: start 0x0 length 0x4000 00:34:12.246 Nvme1n1 : 15.00 8612.38 33.64 10825.24 0.00 6565.54 658.92 23137.06 00:34:12.246 =================================================================================================================== 00:34:12.246 Total : 8612.38 33.64 10825.24 0.00 6565.54 658.92 23137.06 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:12.246 rmmod nvme_tcp 00:34:12.246 rmmod nvme_fabrics 00:34:12.246 rmmod nvme_keyring 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1838255 ']' 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1838255 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1838255 ']' 00:34:12.246 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1838255 00:34:12.247 19:40:22 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1838255 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1838255' 00:34:12.247 killing process with pid 1838255 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1838255 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1838255 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:12.247 19:40:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.151 19:40:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:14.151 00:34:14.151 real 0m25.491s 00:34:14.151 user 1m0.570s 00:34:14.151 sys 0m6.158s 00:34:14.151 19:40:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:14.151 19:40:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.151 ************************************ 00:34:14.151 END TEST nvmf_bdevperf 00:34:14.151 ************************************ 00:34:14.151 19:40:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:14.151 19:40:24 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:14.151 19:40:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:14.151 19:40:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:14.151 19:40:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:14.151 ************************************ 00:34:14.151 START TEST nvmf_target_disconnect 00:34:14.151 ************************************ 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:14.151 * Looking for test storage... 
00:34:14.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:14.151 19:40:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:14.152 19:40:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
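The array bookkeeping above (e810, x722, mlx) is the harness selecting candidate NICs from a cached PCI scan keyed by vendor:device ID. Outside the harness, an equivalent manual check is a plain lspci query for the same IDs; this is a sketch of that check, not what nvmf/common.sh itself executes:

# List Intel E810 functions (device ID 0x159b, as matched below) with full PCI addresses.
lspci -D -d 8086:159b
# For each function, show the bound driver and the attached netdev name, if any.
for dev in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    drv=$(basename "$(readlink -f /sys/bus/pci/devices/$dev/driver)")
    net=$(ls /sys/bus/pci/devices/$dev/net 2>/dev/null)
    echo "$dev -> driver=$drv netdev=$net"
done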
00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:19.422 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:19.422 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.422 19:40:29 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:19.422 Found net devices under 0000:86:00.0: cvl_0_0 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:19.422 Found net devices under 0000:86:00.1: cvl_0_1 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:19.422 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:19.423 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:19.423 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:19.423 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:19.423 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:19.423 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:19.423 19:40:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:19.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:19.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:34:19.423 00:34:19.423 --- 10.0.0.2 ping statistics --- 00:34:19.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.423 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:19.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:34:19.423 00:34:19.423 --- 10.0.0.1 ping statistics --- 00:34:19.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.423 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:19.423 19:40:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:19.683 ************************************ 00:34:19.683 START TEST nvmf_target_disconnect_tc1 00:34:19.683 ************************************ 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:19.683 
19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:19.683 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.683 [2024-07-15 19:40:30.364438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.683 [2024-07-15 19:40:30.364544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad2ec0 with addr=10.0.0.2, port=4420 00:34:19.683 [2024-07-15 19:40:30.364596] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:19.683 [2024-07-15 19:40:30.364620] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:19.683 [2024-07-15 19:40:30.364639] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:19.683 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:19.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:19.683 Initializing NVMe Controllers 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:19.683 00:34:19.683 real 0m0.095s 00:34:19.683 user 0m0.036s 00:34:19.683 sys 
0m0.059s 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:19.683 ************************************ 00:34:19.683 END TEST nvmf_target_disconnect_tc1 00:34:19.683 ************************************ 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:19.683 ************************************ 00:34:19.683 START TEST nvmf_target_disconnect_tc2 00:34:19.683 ************************************ 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1843189 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1843189 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1843189 ']' 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
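The nvmf_tcp_init plumbing earlier in this log is what makes 10.0.0.2 reachable at all: the target interface is moved into its own network namespace and given the target address, while the initiator keeps the peer interface in the default namespace. Collected into one sketch, using exactly the namespace, interface and address names printed above:

# Target side lives in its own netns; the initiator stays in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port on the initiator-side interface
ping -c 1 10.0.0.2                                                  # same sanity check as in the log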
00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:19.683 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.683 [2024-07-15 19:40:30.480301] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:34:19.683 [2024-07-15 19:40:30.480344] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.683 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.683 [2024-07-15 19:40:30.510690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:19.943 [2024-07-15 19:40:30.550385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.943 [2024-07-15 19:40:30.592110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.943 [2024-07-15 19:40:30.592147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.943 [2024-07-15 19:40:30.592155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.943 [2024-07-15 19:40:30.592161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.943 [2024-07-15 19:40:30.592167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.943 [2024-07-15 19:40:30.592276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:19.943 [2024-07-15 19:40:30.592300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:19.943 [2024-07-15 19:40:30.592387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:19.943 [2024-07-15 19:40:30.592389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.943 Malloc0 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.943 19:40:30 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.943 [2024-07-15 19:40:30.744124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.943 [2024-07-15 19:40:30.772359] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1843220 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:19.943 19:40:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:20.202 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.119 19:40:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1843189 00:34:22.119 19:40:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with 
error (sct=0, sc=8) 00:34:22.119 [2024-07-15 19:40:32.798737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 [2024-07-15 19:40:32.798938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 
00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Read completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.119 Write completed with error (sct=0, sc=8) 00:34:22.119 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 [2024-07-15 19:40:32.799135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting 
I/O failed 00:34:22.120 Write completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Write completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Write completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Write completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Write completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Write completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Write completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Write completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 Read completed with error (sct=0, sc=8) 00:34:22.120 starting I/O failed 00:34:22.120 [2024-07-15 19:40:32.799332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.120 [2024-07-15 19:40:32.799485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.799506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.799714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.799745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.800000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.800031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.800432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.800445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.800576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.800599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 
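The burst of "completed with error (sct=0, sc=8)" / "starting I/O failed" entries and the CQ transport errors above are the intended outcome of host/target_disconnect.sh@45: the target process is killed while the reconnect app still has I/O queued. As a sketch, with the arguments and pids from this run:

# tc2: start the I/O generator, then yank the target out from under it.
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
reconnectpid=$!            # 1843220 in this run
sleep 2
kill -9 "$nvmfpid"         # 1843189 in this run; queued I/O completes with errors
sleep 2                    # the qpair/connect failures below accumulate while the target is down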
00:34:22.120 [2024-07-15 19:40:32.800733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.800744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.800888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.800919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.801080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.801110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.801338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.801370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.801583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.801594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.801713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.801725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.802013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.802044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.802299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.802311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.802420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.802432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.802546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.802557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 
00:34:22.120 [2024-07-15 19:40:32.802769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.802799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.803080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.803113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.803274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.803306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.803493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.803524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.803709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.803739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.804272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.804309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.804523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.804535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.804788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.804822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.805047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.805078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.805373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.805405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 
00:34:22.120 [2024-07-15 19:40:32.805573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.805604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.805789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.805820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.806166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.120 [2024-07-15 19:40:32.806206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.120 qpair failed and we were unable to recover it. 00:34:22.120 [2024-07-15 19:40:32.806436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.806448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.806577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.806589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.806790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.806802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.807046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.807077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.807396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.807428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.807595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.807626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.807799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.807829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 
00:34:22.121 [2024-07-15 19:40:32.808071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.808084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.808338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.808384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.808684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.808715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.808941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.808972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.809220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.809261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.809425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.809456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.809719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.809769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.810045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.810069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.810276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.810292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.810558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.810573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 
00:34:22.121 [2024-07-15 19:40:32.810768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.810782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.811070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.811085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.811209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.811229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.811471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.811486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.811602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.811617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.811820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.811836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.812032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.812046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.812179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.812193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.812443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.812476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.812641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.812681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 
00:34:22.121 [2024-07-15 19:40:32.812836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.812868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.813165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.813197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.813445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.813482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.813659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.813690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.814003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.814034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.814280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.814312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.814561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.814592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.814818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.814849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.815129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.815160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.815444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.815475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 
00:34:22.121 [2024-07-15 19:40:32.815737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.815768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.815990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.816020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.816259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.816271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.121 qpair failed and we were unable to recover it. 00:34:22.121 [2024-07-15 19:40:32.816414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.121 [2024-07-15 19:40:32.816426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.816611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.816643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.816935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.816966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.817259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.817293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.817511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.817546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.817734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.817766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.817986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.818017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 
00:34:22.122 [2024-07-15 19:40:32.818239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.818269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.818472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.818483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.818727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.818762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.819090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.819121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.819394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.819427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.819728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.819759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.820040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.820071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.820390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.820422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.820675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.820706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.821002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.821042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 
00:34:22.122 [2024-07-15 19:40:32.821238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.821250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.821432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.821462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.821687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.821718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.821963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.821995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.822205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.822216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.822420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.822451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.822679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.822709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.822963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.822993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.823260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.823293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.823457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.823493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 
00:34:22.122 [2024-07-15 19:40:32.823668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.823699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.823983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.824014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.824289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.824321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.824630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.824661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.824945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.824976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.825295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.825328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.825608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.825638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.825881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.825912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.826134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.826165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.826404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.826436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 
00:34:22.122 [2024-07-15 19:40:32.826663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.826693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.826918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.826950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.827155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.827166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.122 qpair failed and we were unable to recover it. 00:34:22.122 [2024-07-15 19:40:32.827307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.122 [2024-07-15 19:40:32.827340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.827637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.827668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.827974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.828005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.828156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.828188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.828411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.828423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.828608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.828619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.828855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.828886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 
00:34:22.123 [2024-07-15 19:40:32.829110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.829141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.829348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.829360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.829614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.829645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.829922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.829953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.830267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.830279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.830465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.830477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.830621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.830652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.830942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.830973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.831265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.831298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.831584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.831615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 
00:34:22.123 [2024-07-15 19:40:32.831846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.831877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.832173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.832203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.832463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.832506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.832706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.832718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.832884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.832896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.833167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.833198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.833448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.833480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.833763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.833774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.834067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.834097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.834386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.834423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 
00:34:22.123 [2024-07-15 19:40:32.834577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.834607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.834770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.834800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.835025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.123 [2024-07-15 19:40:32.835056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.123 qpair failed and we were unable to recover it. 00:34:22.123 [2024-07-15 19:40:32.835287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.835298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.835536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.835567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.835746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.835777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.835928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.835958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.836172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.836204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.836424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.836455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.836730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.836760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 
00:34:22.124 [2024-07-15 19:40:32.837081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.837111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.837260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.837303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.837504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.837515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.837702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.837713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.837912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.837943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.838175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.838206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.838438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.838449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.838631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.838642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.838883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.838895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.839066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.839078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 
00:34:22.124 [2024-07-15 19:40:32.839259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.839291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.839505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.839536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.839814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.839846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.840003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.840038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.840242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.840254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.840427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.840438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.840626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.840657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.840818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.840849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.841066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.841097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.841292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.841304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 
00:34:22.124 [2024-07-15 19:40:32.841560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.841591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.841893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.841924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.842175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.842206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.842450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.842481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.842832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.842862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.843086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.843116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.843419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.843451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.843661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.843692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.844012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.844042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.844261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.844298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 
00:34:22.124 [2024-07-15 19:40:32.844570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.844582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.844689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.844699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.844941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.844972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.124 qpair failed and we were unable to recover it. 00:34:22.124 [2024-07-15 19:40:32.845258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.124 [2024-07-15 19:40:32.845298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.845505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.845516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.845728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.845740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.846078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.846110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.846339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.846372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.846583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.846614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.846771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.846802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 
00:34:22.125 [2024-07-15 19:40:32.847126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.847157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.847466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.847499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.847797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.847828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.848016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.848047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.848365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.848398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.848616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.848627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.848766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.848798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.849040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.849070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.849366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.849406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.849684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.849695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 
00:34:22.125 [2024-07-15 19:40:32.849972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.850003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.850215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.850257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.850551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.850583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.850816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.850847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.851148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.851179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.851514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.851546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.851852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.851884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.852131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.852162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.852375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.852387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.852595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.852626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 
00:34:22.125 [2024-07-15 19:40:32.852924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.852955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.853280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.853304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.853489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.853501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.853764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.853795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.854083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.854114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.854335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.854367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.854642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.854672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.854978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.855009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.855222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.855262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.855550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.855580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 
00:34:22.125 [2024-07-15 19:40:32.855887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.855919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.856148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.856178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.856422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.856435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.125 [2024-07-15 19:40:32.856625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.125 [2024-07-15 19:40:32.856655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.125 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.856867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.856897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.857271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.857304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.857599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.857630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.857933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.857964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.858183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.858196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.858404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.858437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 
00:34:22.126 [2024-07-15 19:40:32.858714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.858745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.858973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.859004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.859304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.859336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.859618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.859630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.859899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.859911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.860170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.860182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.860369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.860381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.860570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.860601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.860765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.860797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.861024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.861056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 
00:34:22.126 [2024-07-15 19:40:32.861279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.861292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.861480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.861511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.861811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.861843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.862142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.862173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.862432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.862465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.862782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.862794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.863008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.863044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.863285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.863318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.863532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.863562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.863768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.863799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 
00:34:22.126 [2024-07-15 19:40:32.864100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.864131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.864428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.864460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.864621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.864653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.864814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.864845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.865124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.865158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.865289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.865303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.865404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.865415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.865728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.865759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.866065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.866096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.126 [2024-07-15 19:40:32.866392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.866425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 
00:34:22.126 [2024-07-15 19:40:32.866667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.126 [2024-07-15 19:40:32.866698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.126 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.866990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.867022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.867331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.867365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.867664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.867696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.867995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.868026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.868257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.868289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.868588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.868620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.868848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.868879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.869163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.869195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.869420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.869452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 
00:34:22.127 [2024-07-15 19:40:32.869758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.869789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.870097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.870129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.870437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.870469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.870762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.870793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.871072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.871103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.871265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.871298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.871603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.871635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.871809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.871840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.872010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.872042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.872247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.872259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 
00:34:22.127 [2024-07-15 19:40:32.872526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.872558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.872856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.872888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.873131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.873161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.873443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.873476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.873776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.873807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.874044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.874078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.874289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.874303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.874563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.874575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.874860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.874891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.875105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.875143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 
00:34:22.127 [2024-07-15 19:40:32.875317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.875329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.875523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.875535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.875841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.875873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.876084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.876115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.876343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.876376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.876680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.876711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.877013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.877044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.877207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.877247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.877532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.877564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.127 qpair failed and we were unable to recover it. 00:34:22.127 [2024-07-15 19:40:32.877844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.127 [2024-07-15 19:40:32.877868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 
00:34:22.128 [2024-07-15 19:40:32.878144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.878177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.878499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.878532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.878835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.878866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.879165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.879196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.879380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.879393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.879591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.879622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.879841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.879872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.880177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.880220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.880496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.880508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.880700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.880713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 
00:34:22.128 [2024-07-15 19:40:32.880896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.880927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.881205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.881278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.881563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.881594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.881884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.881915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.882152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.882183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.882480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.882513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.882758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.882797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.883060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.883072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.883387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.883400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.883697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.883728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 
00:34:22.128 [2024-07-15 19:40:32.883956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.883988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.884151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.884182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.884449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.884462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.884751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.884782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.885090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.885121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.885434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.885446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.885641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.885677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.885974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.886006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.128 [2024-07-15 19:40:32.886310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.128 [2024-07-15 19:40:32.886343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.128 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.886637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.886668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 
00:34:22.129 [2024-07-15 19:40:32.886977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.887008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.887236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.887268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.887499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.887530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.887775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.887806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.888027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.888058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.888344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.888357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.888598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.888629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.888778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.888809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.889140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.889171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.889438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.889449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 
00:34:22.129 [2024-07-15 19:40:32.889745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.889777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.890050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.890082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.890411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.890444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.890747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.890778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.891081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.891112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.891434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.891466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.891768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.891801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.892100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.892131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.892354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.892367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.892625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.892656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 
00:34:22.129 [2024-07-15 19:40:32.892950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.892981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.893262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.893306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.893629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.893661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.893974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.894006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.894296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.894330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.894647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.894678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.894913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.894944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.895253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.895285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.895513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.895545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 00:34:22.129 [2024-07-15 19:40:32.895898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.129 [2024-07-15 19:40:32.895929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.129 qpair failed and we were unable to recover it. 
00:34:22.134 [2024-07-15 19:40:32.952926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.134 [2024-07-15 19:40:32.952959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.134 qpair failed and we were unable to recover it. 00:34:22.134 [2024-07-15 19:40:32.953214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.134 [2024-07-15 19:40:32.953255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.134 qpair failed and we were unable to recover it. 00:34:22.134 [2024-07-15 19:40:32.953505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.134 [2024-07-15 19:40:32.953537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.134 qpair failed and we were unable to recover it. 00:34:22.134 [2024-07-15 19:40:32.953832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.134 [2024-07-15 19:40:32.953865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.134 qpair failed and we were unable to recover it. 00:34:22.134 [2024-07-15 19:40:32.954178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.134 [2024-07-15 19:40:32.954210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.134 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.954535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.954567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.954839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.954871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.955208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.955265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.955578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.955611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.955896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.955909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 
00:34:22.135 [2024-07-15 19:40:32.956209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.956253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.956409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.956441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.956684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.956716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.956939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.956972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.957258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.957293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.957626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.957658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.957966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.957982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.958245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.958259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.958537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.958551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.958797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.958811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 
00:34:22.135 [2024-07-15 19:40:32.959060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.959073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.959376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.959410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.959663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.959695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.959955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.959987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.960304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.960338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.960630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.960662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.960915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.960927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.961182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.961196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.961469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.961504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.961768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.961810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 
00:34:22.135 [2024-07-15 19:40:32.962124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.962156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.962393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.962426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.962715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.962747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.963062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.963094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.963269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.963303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.135 [2024-07-15 19:40:32.963528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.135 [2024-07-15 19:40:32.963541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.135 qpair failed and we were unable to recover it. 00:34:22.431 [2024-07-15 19:40:32.963787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.431 [2024-07-15 19:40:32.963802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.431 qpair failed and we were unable to recover it. 00:34:22.431 [2024-07-15 19:40:32.964054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.431 [2024-07-15 19:40:32.964088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.431 qpair failed and we were unable to recover it. 00:34:22.431 [2024-07-15 19:40:32.964357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.431 [2024-07-15 19:40:32.964401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.431 qpair failed and we were unable to recover it. 00:34:22.431 [2024-07-15 19:40:32.964511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.431 [2024-07-15 19:40:32.964525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.431 qpair failed and we were unable to recover it. 
00:34:22.431 [2024-07-15 19:40:32.964724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.431 [2024-07-15 19:40:32.964756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.431 qpair failed and we were unable to recover it. 00:34:22.431 [2024-07-15 19:40:32.965064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.431 [2024-07-15 19:40:32.965097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.431 qpair failed and we were unable to recover it. 00:34:22.431 [2024-07-15 19:40:32.965322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.431 [2024-07-15 19:40:32.965355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.431 qpair failed and we were unable to recover it. 00:34:22.431 [2024-07-15 19:40:32.965555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4cf30 is same with the state(5) to be set 00:34:22.431 [2024-07-15 19:40:32.965870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.431 [2024-07-15 19:40:32.965917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.431 qpair failed and we were unable to recover it. 00:34:22.431 [2024-07-15 19:40:32.966114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.431 [2024-07-15 19:40:32.966158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.431 qpair failed and we were unable to recover it. 00:34:22.431 [2024-07-15 19:40:32.966433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.966452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.966644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.966661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.966883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.966916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.967181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.967213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 
00:34:22.432 [2024-07-15 19:40:32.967458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.967491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.967709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.967741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.968027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.968059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.968328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.968362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.968694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.968726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.968956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.968989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.969299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.969332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.969672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.969704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.969929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.969946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.970073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.970105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 
00:34:22.432 [2024-07-15 19:40:32.970376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.970408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.970725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.970758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.971055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.971088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.971399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.971431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.971650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.971683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.971977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.972009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.972242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.972276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.972515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.972548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.972718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.972763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.972907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.972923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 
00:34:22.432 [2024-07-15 19:40:32.973116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.973167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.973511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.973543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.973765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.973797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.974139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.974171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.974435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.974468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.974677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.974693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.974945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.974983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.975274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.975307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.975623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.975655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.975946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.975978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 
00:34:22.432 [2024-07-15 19:40:32.976220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.976263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.976510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.976543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.976831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.976864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.977098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.977130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.977426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.977459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.977758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.977790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.432 qpair failed and we were unable to recover it. 00:34:22.432 [2024-07-15 19:40:32.978055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.432 [2024-07-15 19:40:32.978087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.978317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.978350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.978679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.978711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.978979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.979012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 
00:34:22.433 [2024-07-15 19:40:32.979251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.979285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.979467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.979500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.979837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.979869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.980115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.980132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.980341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.980358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.980555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.980571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.980844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.980882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.981259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.981292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.981607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.981640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.981865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.981896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 
00:34:22.433 [2024-07-15 19:40:32.982130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.982163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.982490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.982523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.982752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.982784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.983049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.983082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.983418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.983453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.983694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.983726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.984045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.984077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.984392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.984425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.984722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.984754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.985053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.985099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 
00:34:22.433 [2024-07-15 19:40:32.985390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.985438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.985750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.985782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.986006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.986037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.986402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.986435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.986751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.986783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.987096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.987128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.987443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.987478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.987788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.987820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.988059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.988091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.988345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.988378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 
00:34:22.433 [2024-07-15 19:40:32.988635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.988667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.988953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.988985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.989210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.989251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.989571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.989603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.989941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.989973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.990246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.990279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.990540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.433 [2024-07-15 19:40:32.990572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.433 qpair failed and we were unable to recover it. 00:34:22.433 [2024-07-15 19:40:32.990863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.990901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.991166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.991183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.991440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.991458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 
00:34:22.434 [2024-07-15 19:40:32.991671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.991687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.991828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.991845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.992124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.992140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.992398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.992415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.992555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.992572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.992809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.992850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.993074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.993106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.993369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.993403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.993696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.993713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.993993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.994010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 
00:34:22.434 [2024-07-15 19:40:32.994279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.994296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.994571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.994602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.994898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.994930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.995267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.995301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.995596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.995629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.995920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.995937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.996238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.996271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.996589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.996622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.996791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.996823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.997047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.997079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 
00:34:22.434 [2024-07-15 19:40:32.997316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.997354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.997594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.997625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.997938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.997970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.998245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.998279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.998448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.998489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.998680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.998697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.998879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.998896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.999104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.999121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.999402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.999419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:32.999693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:32.999726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 
00:34:22.434 [2024-07-15 19:40:33.000022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:33.000055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:33.000354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:33.000403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:33.000723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:33.000740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:33.000899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:33.000916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:33.001115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:33.001132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:33.001421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:33.001455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:33.001714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:33.001746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:33.001990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.434 [2024-07-15 19:40:33.002023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.434 qpair failed and we were unable to recover it. 00:34:22.434 [2024-07-15 19:40:33.002316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.002349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.002665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.002697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 
00:34:22.435 [2024-07-15 19:40:33.002984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.003017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.003341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.003375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.003598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.003631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.003865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.003881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.004135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.004172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.004350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.004382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.004623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.004655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.004926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.004943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.005146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.005163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.005349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.005367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 
00:34:22.435 [2024-07-15 19:40:33.005643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.005676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.006011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.006044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.006360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.006393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.006703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.006744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.006867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.006883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.007000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.007042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.007200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.007247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.007468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.007500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.007745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.007762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.008064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.008095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 
00:34:22.435 [2024-07-15 19:40:33.008334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.008373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.008665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.008697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.008997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.009029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.009340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.009374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.009670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.009703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.010007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.010024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.010237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.010255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.010552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.010584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.010838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.010870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.011170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.011202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 
00:34:22.435 [2024-07-15 19:40:33.011515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.011548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.011840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.011872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.012188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.012220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.012533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.012566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.012790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.012822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.435 qpair failed and we were unable to recover it. 00:34:22.435 [2024-07-15 19:40:33.013130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.435 [2024-07-15 19:40:33.013146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.013277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.013295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.013554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.013587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.013820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.013853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.014135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.014152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 
00:34:22.436 [2024-07-15 19:40:33.014362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.014380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.014685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.014718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.014937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.014969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.015221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.015264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.015601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.015633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.015894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.015926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.016173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.016206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.016562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.016595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.016907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.016940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.017245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.017279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 
00:34:22.436 [2024-07-15 19:40:33.017506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.017538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.017848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.017881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.018191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.018223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.018530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.018562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.018867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.018907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.019111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.019129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.019384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.019424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.019732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.019764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.020076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.020109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.020408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.020441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 
00:34:22.436 [2024-07-15 19:40:33.020687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.020724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.021056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.021089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.021324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.021357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.021669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.021703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.021926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.021959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.022197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.022239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.022537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.022569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.022804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.022836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.023148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.023180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 00:34:22.436 [2024-07-15 19:40:33.023415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.023449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.436 qpair failed and we were unable to recover it. 
00:34:22.436 [2024-07-15 19:40:33.023697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.436 [2024-07-15 19:40:33.023713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.023923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.023939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.024244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.024277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.024518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.024551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.024897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.024929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.025249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.025283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.025561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.025593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.025815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.025847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.026067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.026100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.026417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.026450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 
00:34:22.437 [2024-07-15 19:40:33.026729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.026761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.026999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.027032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.027275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.027309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.027652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.027684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.027944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.027960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.028246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.028264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.028472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.028490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.028828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.028906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.029177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.029212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.029538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.029573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 
00:34:22.437 [2024-07-15 19:40:33.029894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.029926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.030157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.030190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.030524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.030558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.030812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.030843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.031135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.031166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.031427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.031462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.031677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.031690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.031967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.031999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.032252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.032287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.032527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.032563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 
00:34:22.437 [2024-07-15 19:40:33.032743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.032760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.032987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.033019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.033313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.033346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.033663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.033696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.034028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.034060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.034354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.034388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.034547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.034578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.034861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.034873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.035158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.035190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.035532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.035566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 
00:34:22.437 [2024-07-15 19:40:33.035878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.437 [2024-07-15 19:40:33.035910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.437 qpair failed and we were unable to recover it. 00:34:22.437 [2024-07-15 19:40:33.036196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.036240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.036505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.036537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.036859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.036891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.037116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.037149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.037441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.037475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.037699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.037712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.037984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.038016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.038249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.038282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.038616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.038648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 
00:34:22.438 [2024-07-15 19:40:33.038959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.038991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.039155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.039187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.039487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.039521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.039821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.039854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.040090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.040122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.040353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.040387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.040674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.040706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.041026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.041059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.041300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.041334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.041562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.041594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 
00:34:22.438 [2024-07-15 19:40:33.041812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.041844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.042075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.042108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.042335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.042368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.042667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.042700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.042868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.042901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.043206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.043219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.043415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.043429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.043576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.043589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.043842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.043875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 00:34:22.438 [2024-07-15 19:40:33.044165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.438 [2024-07-15 19:40:33.044197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.438 qpair failed and we were unable to recover it. 
00:34:22.438 [2024-07-15 19:40:33.044517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.438 [2024-07-15 19:40:33.044554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:22.438 qpair failed and we were unable to recover it.
00:34:22.438 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 19:40:33.044 through 19:40:33.103 ...]
00:34:22.444 [2024-07-15 19:40:33.103411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.444 [2024-07-15 19:40:33.103444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:22.444 qpair failed and we were unable to recover it.
00:34:22.444 [2024-07-15 19:40:33.103681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.103712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.103919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.103932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.104122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.104153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.104491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.104526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.104816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.104848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.105162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.105194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.105435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.105468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.105732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.105763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.106036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.106050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.106234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.106248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 
00:34:22.444 [2024-07-15 19:40:33.106388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.106401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.106683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.444 [2024-07-15 19:40:33.106696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.444 qpair failed and we were unable to recover it. 00:34:22.444 [2024-07-15 19:40:33.106953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.106984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.107296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.107329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.107626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.107657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.107970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.108002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.108263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.108296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.108524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.108562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.108794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.108826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.109051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.109083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 
00:34:22.445 [2024-07-15 19:40:33.109305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.109338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.109586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.109617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.109860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.109892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.110187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.110219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.110472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.110486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.110681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.110694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.111011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.111024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.111209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.111222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.111377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.111389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.111508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.111553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 
00:34:22.445 [2024-07-15 19:40:33.111778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.111810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.111987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.112019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.112254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.112268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.112458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.112471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.112620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.112652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.112965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.112997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.113303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.113318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.113600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.113632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.113987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.114018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.114270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.114305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 
00:34:22.445 [2024-07-15 19:40:33.114548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.114579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.114867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.114909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.115108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.115121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.115319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.115333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.115518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.115532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.445 [2024-07-15 19:40:33.115735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.445 [2024-07-15 19:40:33.115767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.445 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.115993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.116025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.116248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.116282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.116599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.116631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.116857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.116889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 
00:34:22.446 [2024-07-15 19:40:33.117237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.117272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.117576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.117607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.117897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.117933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.118260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.118293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.118629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.118662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.118881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.118913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.119144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.119175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.119495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.119539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.119838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.119870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.120165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.120197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 
00:34:22.446 [2024-07-15 19:40:33.120608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.120686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.121040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.121076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.121369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.121389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.121646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.121664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.121784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.121802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.122071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.122088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.122395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.122414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.122544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.122582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.122820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.122852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.123018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.123050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 
00:34:22.446 [2024-07-15 19:40:33.123406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.123423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.123670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.123703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.123944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.123961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.124104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.124121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.124378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.124397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.124702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.124719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.124937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.124973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.125284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.125318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.125570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.125601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.125841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.125874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 
00:34:22.446 [2024-07-15 19:40:33.126138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.126153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.126353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.126370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.126488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.126503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.126668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.126698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.127013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.127051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.127202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.127217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.446 [2024-07-15 19:40:33.127439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.446 [2024-07-15 19:40:33.127473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.446 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.127769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.127802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.128129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.128161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.128388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.128421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 
00:34:22.447 [2024-07-15 19:40:33.128741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.128772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.129005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.129037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.129320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.129337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.129618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.129651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.129838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.129854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.129972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.129987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.130185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.130199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.130432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.130447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.130581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.130596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.130785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.130814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 
00:34:22.447 [2024-07-15 19:40:33.131059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.131091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.131403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.131436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.131685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.131717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.131892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.131921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.132248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.132281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.132525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.132557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.132794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.132826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.133138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.133171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.133340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.133355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.133555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.133584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 
00:34:22.447 [2024-07-15 19:40:33.133891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.133908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.134097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.134117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.134321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.134339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.134554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.134571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.134826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.134862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.135101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.135132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.135440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.135473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.135627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.135659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.135885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.135917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.136156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.136189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 
00:34:22.447 [2024-07-15 19:40:33.136494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.136531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.136830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.136861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.137087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.137128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.137406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.137423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.137710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.137742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.138035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.138071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.138293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.138310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.138497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.447 [2024-07-15 19:40:33.138515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.447 qpair failed and we were unable to recover it. 00:34:22.447 [2024-07-15 19:40:33.138729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.138760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.138968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.138985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 
00:34:22.448 [2024-07-15 19:40:33.139199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.139254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.139518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.139551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.139772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.139804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.140024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.140055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.140341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.140359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.140597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.140613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.140883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.140914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.141273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.141307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.141512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.141543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.141931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.141964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 
00:34:22.448 [2024-07-15 19:40:33.142123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.142155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.142384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.142417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.142663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.142699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.142842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.142858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.143036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.143069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.143314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.143346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.143575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.143606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.143773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.143805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.144019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.144034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.144320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.144352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 
00:34:22.448 [2024-07-15 19:40:33.144534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.144566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.144885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.144921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.145141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.145174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.145481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.145498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.145633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.145649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.145860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.145876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.146064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.146097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.146283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.146316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.146468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.146500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.146723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.146755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 
00:34:22.448 [2024-07-15 19:40:33.146993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.147025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.147271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.147318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.147505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.147522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.147672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.147689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.147894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.147926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.148221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.148265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.148445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.148478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.148790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.148822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.149131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.149163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 00:34:22.448 [2024-07-15 19:40:33.149401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.448 [2024-07-15 19:40:33.149418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.448 qpair failed and we were unable to recover it. 
00:34:22.449 [2024-07-15 19:40:33.149610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.149626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.149911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.149944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.150130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.150148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.150355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.150372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.150561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.150593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.150835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.150867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.151171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.151187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.151312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.151329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.151596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.151628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.151967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.152018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 
00:34:22.449 [2024-07-15 19:40:33.152200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.152216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.152485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.152502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.152690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.152707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.153009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.153026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.153234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.153252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.153386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.153404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.153730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.153761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.154096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.154128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.154421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.154455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.154756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.154787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 
00:34:22.449 [2024-07-15 19:40:33.155101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.155134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.155421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.155438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.155565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.155580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.155888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.155922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.156201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.156243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.156569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.156601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.156920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.156953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.157192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.157208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.157494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.157512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.157742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.157759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 
00:34:22.449 [2024-07-15 19:40:33.157949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.157966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.158166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.158184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.158475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.158510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.158729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.158761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.159127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.449 [2024-07-15 19:40:33.159160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.449 qpair failed and we were unable to recover it. 00:34:22.449 [2024-07-15 19:40:33.159492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.159526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.159812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.159861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.160163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.160195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.160482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.160516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.160769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.160801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 
00:34:22.450 [2024-07-15 19:40:33.161145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.161179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.161393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.161411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.161708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.161742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.162056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.162087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.162295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.162329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.162570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.162603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.162837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.162869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.163167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.163200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.163517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.163549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.163797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.163830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 
00:34:22.450 [2024-07-15 19:40:33.164082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.164115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.164274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.164307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.164611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.164643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.164921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.164953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.165177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.165209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.165513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.165546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.165787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.165817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.166040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.166073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.166292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.166325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.166640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.166672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 
00:34:22.450 [2024-07-15 19:40:33.166973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.167006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.167333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.167367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.167590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.167621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.167808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.167840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.168155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.168188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.168423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.168440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.168776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.168807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.169050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.169083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.169336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.169368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.169678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.169709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 
00:34:22.450 [2024-07-15 19:40:33.169989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.170022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.170324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.170357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.170624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.170656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.171002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.171033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.171331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.171375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.171615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.171645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.450 qpair failed and we were unable to recover it. 00:34:22.450 [2024-07-15 19:40:33.171876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.450 [2024-07-15 19:40:33.171919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.172085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.172130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.172419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.172462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.172669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.172687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 
00:34:22.451 [2024-07-15 19:40:33.172981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.172998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.173203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.173219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.173416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.173433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.173723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.173754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.174085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.174117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.174433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.174467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.174714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.174745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.175096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.175128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.175417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.175435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.175672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.175688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 
00:34:22.451 [2024-07-15 19:40:33.175995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.176036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.176350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.176384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.176617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.176650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.176977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.177009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.177316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.177333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.177541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.177558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.177839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.177855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.178143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.178175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.178435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.178469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.178659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.178691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 
00:34:22.451 [2024-07-15 19:40:33.178936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.178968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.179204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.179257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.179470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.179487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.179780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.179811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.180047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.180079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.180313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.180347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.180658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.180690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.181005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.181038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.181329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.181362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.181550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.181582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 
00:34:22.451 [2024-07-15 19:40:33.181921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.181952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.182252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.182269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.182524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.182541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.182796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.182813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.183054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.183088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.183311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.183343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.183657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.451 [2024-07-15 19:40:33.183689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.451 qpair failed and we were unable to recover it. 00:34:22.451 [2024-07-15 19:40:33.183991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.184065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.184323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.184361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.184701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.184735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 
00:34:22.452 [2024-07-15 19:40:33.185051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.185085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.185394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.185428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.185723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.185754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.185962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.185995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.186317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.186350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.186501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.186533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.186844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.186876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.187199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.187216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.187434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.187450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.187658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.187674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 
00:34:22.452 [2024-07-15 19:40:33.187958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.188000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.188336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.188368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.188680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.188712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.189040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.189080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.189270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.189287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.189494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.189510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.189808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.189840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.190006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.190039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.190303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.190337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.190565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.190581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 
00:34:22.452 [2024-07-15 19:40:33.190773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.190790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.190991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.191008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.191146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.191163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.191444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.191478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.191703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.191735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.191971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.192004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.192245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.192262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.192572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.192604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.192926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.192958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.193217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.193240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 
00:34:22.452 [2024-07-15 19:40:33.193437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.193470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.193685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.193717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.193955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.193996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.194252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.194270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.194488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.194520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.194834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.194867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.195169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.195187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.195503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.452 [2024-07-15 19:40:33.195521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.452 qpair failed and we were unable to recover it. 00:34:22.452 [2024-07-15 19:40:33.195824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.195855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.196144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.196176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 
00:34:22.453 [2024-07-15 19:40:33.196409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.196443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.196756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.196788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.197083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.197116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.197427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.197459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.197755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.197787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.198100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.198131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.198451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.198484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.198801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.198832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.199146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.199178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.199507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.199541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 
00:34:22.453 [2024-07-15 19:40:33.199771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.199813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.200071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.200104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.200401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.200435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.200743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.200776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.201081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.201113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.201379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.201396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.201677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.201694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.201967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.201983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.202279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.202313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.202643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.202676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 
00:34:22.453 [2024-07-15 19:40:33.202914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.202945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.203270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.203305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.203621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.203652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.203894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.203926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.204275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.204309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.204569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.204600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.204939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.204970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.205253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.205296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.205535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.205568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.205856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.205888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 
00:34:22.453 [2024-07-15 19:40:33.206122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.206166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.206427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.206444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.206723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.206759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.453 [2024-07-15 19:40:33.207028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.453 [2024-07-15 19:40:33.207060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.453 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.207307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.207340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.207583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.207615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.207866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.207899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.208261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.208339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.208696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.208734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.209054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.209088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 
00:34:22.454 [2024-07-15 19:40:33.209403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.209436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.209699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.209732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.209981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.210013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.210248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.210281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.210635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.210667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.210961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.210993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.211281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.211298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.211556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.211573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.211724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.211741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.211958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.211990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 
00:34:22.454 [2024-07-15 19:40:33.212324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.212357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.212640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.212657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.212864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.212882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.213079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.213096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.213372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.213389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.213609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.213626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.213823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.213839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.214046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.214064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.214357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.214391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.214723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.214755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 
00:34:22.454 [2024-07-15 19:40:33.215017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.215049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.215282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.215315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.215634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.215666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.215926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.215958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.216181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.216219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.216551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.216584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.216813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.216846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.217209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.217249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.217561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.217593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.217829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.217861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 
00:34:22.454 [2024-07-15 19:40:33.218220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.218263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.218571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.218603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.218847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.218879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.219191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.219223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.454 qpair failed and we were unable to recover it. 00:34:22.454 [2024-07-15 19:40:33.219478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.454 [2024-07-15 19:40:33.219510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.219802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.219833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.220067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.220099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.220319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.220353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.220628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.220667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.221012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.221043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 
00:34:22.455 [2024-07-15 19:40:33.221276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.221308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.221618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.221651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.221963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.221995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.222305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.222322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.222525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.222543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.222754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.222771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.222930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.222946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.223147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.223179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.223484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.223518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.223746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.223778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 
00:34:22.455 [2024-07-15 19:40:33.224093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.224125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.224364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.224403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.224643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.224677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.224946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.224978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.225301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.225334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.225621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.225638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.225861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.225877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.226066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.226083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.226295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.226313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.226502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.226519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 
00:34:22.455 [2024-07-15 19:40:33.226801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.226833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.227174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.227206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.227392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.227438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.227693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.227710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.227983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.227999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.228285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.228318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.228611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.228642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.228959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.228991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.229287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.229319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.229561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.229594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 
00:34:22.455 [2024-07-15 19:40:33.229890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.229922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.230180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.230213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.230480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.230513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.230849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.230881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.231168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.455 [2024-07-15 19:40:33.231199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.455 qpair failed and we were unable to recover it. 00:34:22.455 [2024-07-15 19:40:33.231461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.231495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.231787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.231819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.232036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.232068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.232383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.232417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.232649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.232681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 
00:34:22.456 [2024-07-15 19:40:33.233010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.233043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.233357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.233390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.233546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.233561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.233763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.233794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.234114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.234146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.234398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.234415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.234542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.234558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.234788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.234819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.235131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.235162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.235413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.235431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 
00:34:22.456 [2024-07-15 19:40:33.235720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.235752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.236001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.236032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.236261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.236319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.236656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.236671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.236922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.236955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.237255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.237290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.237602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.237634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.237892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.237924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.238191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.238222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.238547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.238579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 
00:34:22.456 [2024-07-15 19:40:33.238869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.238901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.239245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.239283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.239527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.239540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.239794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.239826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.240064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.240095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.240405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.240448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.240774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.240806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.241124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.241156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.241389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.241424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.241755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.241787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 
00:34:22.456 [2024-07-15 19:40:33.242078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.242110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.242429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.242463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.242758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.242790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.243086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.243119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.243452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.243486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.243800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.456 [2024-07-15 19:40:33.243832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.456 qpair failed and we were unable to recover it. 00:34:22.456 [2024-07-15 19:40:33.244093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.244126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.244457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.244471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.244721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.244753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.245071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.245104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 
00:34:22.457 [2024-07-15 19:40:33.245390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.245416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.245678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.245691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.245821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.245835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.246054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.246068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.246266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.246279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.246564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.246598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.246821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.246853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.247161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.247195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.247541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.247574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.247766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.247798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 
00:34:22.457 [2024-07-15 19:40:33.248089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.248121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.248422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.248456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.248766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.248844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.249138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.249174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.249505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.249541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.249869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.249902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.250070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.250102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.250319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.250354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.250589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.250622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.250942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.250975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 
00:34:22.457 [2024-07-15 19:40:33.251162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.251179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.251455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.251473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.251752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.251769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.251978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.252010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.252300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.252333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.252577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.252599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.252737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.252755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.253011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.253027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.253288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.253321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.253635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.253667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 
00:34:22.457 [2024-07-15 19:40:33.253911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.253943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.254250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.254283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.254578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.254614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.254933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.254966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.255296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.255329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.255601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.255633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.255919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.457 [2024-07-15 19:40:33.255951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.457 qpair failed and we were unable to recover it. 00:34:22.457 [2024-07-15 19:40:33.256283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.256316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.256561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.256595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.256838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.256870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 
00:34:22.458 [2024-07-15 19:40:33.257194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.257233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.257402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.257434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.257733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.257765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.258068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.258100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.258410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.258443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.258738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.258770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.259083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.259116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.259425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.259469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.259702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.259718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.259921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.259938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 
00:34:22.458 [2024-07-15 19:40:33.260083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.260114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.260288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.260321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.260690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.260737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.261045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.261082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.261396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.261440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.261731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.261764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.262072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.262104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.262418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.262451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.262702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.262735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.262906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.262938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 
00:34:22.458 [2024-07-15 19:40:33.263238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.263270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.263578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.263611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.263890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.263923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.264269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.264301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.264596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.264628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.264875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.264917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.265266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.265299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.265595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.265628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.265964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.265996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.266317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.266352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 
00:34:22.458 [2024-07-15 19:40:33.266697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.266729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.266998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.267031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.458 qpair failed and we were unable to recover it. 00:34:22.458 [2024-07-15 19:40:33.267345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.458 [2024-07-15 19:40:33.267379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.459 qpair failed and we were unable to recover it. 00:34:22.459 [2024-07-15 19:40:33.267628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.459 [2024-07-15 19:40:33.267663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.459 qpair failed and we were unable to recover it. 00:34:22.459 [2024-07-15 19:40:33.267969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.459 [2024-07-15 19:40:33.268001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.459 qpair failed and we were unable to recover it. 00:34:22.459 [2024-07-15 19:40:33.268307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.459 [2024-07-15 19:40:33.268340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.459 qpair failed and we were unable to recover it. 00:34:22.459 [2024-07-15 19:40:33.268562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.459 [2024-07-15 19:40:33.268578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.459 qpair failed and we were unable to recover it. 00:34:22.459 [2024-07-15 19:40:33.268831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.459 [2024-07-15 19:40:33.268847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.459 qpair failed and we were unable to recover it. 00:34:22.459 [2024-07-15 19:40:33.269052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.459 [2024-07-15 19:40:33.269083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.459 qpair failed and we were unable to recover it. 00:34:22.734 [2024-07-15 19:40:33.269349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.734 [2024-07-15 19:40:33.269384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.734 qpair failed and we were unable to recover it. 
00:34:22.734 [2024-07-15 19:40:33.269691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.734 [2024-07-15 19:40:33.269734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.734 qpair failed and we were unable to recover it. 00:34:22.734 [2024-07-15 19:40:33.270055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.734 [2024-07-15 19:40:33.270087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.734 qpair failed and we were unable to recover it. 00:34:22.734 [2024-07-15 19:40:33.270405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.734 [2024-07-15 19:40:33.270423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.734 qpair failed and we were unable to recover it. 00:34:22.734 [2024-07-15 19:40:33.270612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.734 [2024-07-15 19:40:33.270645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.734 qpair failed and we were unable to recover it. 00:34:22.734 [2024-07-15 19:40:33.270872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.734 [2024-07-15 19:40:33.270905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.734 qpair failed and we were unable to recover it. 00:34:22.734 [2024-07-15 19:40:33.271134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.734 [2024-07-15 19:40:33.271166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.734 qpair failed and we were unable to recover it. 00:34:22.734 [2024-07-15 19:40:33.271548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.734 [2024-07-15 19:40:33.271582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.734 qpair failed and we were unable to recover it. 00:34:22.734 [2024-07-15 19:40:33.271749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.734 [2024-07-15 19:40:33.271781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.734 qpair failed and we were unable to recover it. 00:34:22.734 [2024-07-15 19:40:33.272107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.272139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.272379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.272412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 
00:34:22.735 [2024-07-15 19:40:33.272737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.272769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.273121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.273153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.273397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.273436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.273701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.273733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.274050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.274082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.274352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.274391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.274710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.274742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.274999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.275032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.275280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.275315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.275538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.275554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 
00:34:22.735 [2024-07-15 19:40:33.275814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.275846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.276163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.276195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.276432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.276449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.276722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.276739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.276936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.276968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.277280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.277313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.277643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.277675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.277904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.277936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.278247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.278280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.278585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.278602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 
00:34:22.735 [2024-07-15 19:40:33.278791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.278808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.278932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.278948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.279255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.279289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.279521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.279552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.735 [2024-07-15 19:40:33.279805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.735 [2024-07-15 19:40:33.279821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.735 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.279955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.279973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.280196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.280249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.280528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.280561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.280873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.280905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.281200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.281243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 
00:34:22.736 [2024-07-15 19:40:33.281548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.281579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.281873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.281906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.282150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.282181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.282545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.282578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.282763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.282795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.283053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.283085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.283390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.283424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.283639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.283656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.283843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.283860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.284152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.284184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 
00:34:22.736 [2024-07-15 19:40:33.284511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.284544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.284766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.284797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.285061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.285099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.285392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.285425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.285652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.285669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.285856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.285874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.286075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.286107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.286466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.286498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.286672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.286689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.286916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.286948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 
00:34:22.736 [2024-07-15 19:40:33.287266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.287299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.736 [2024-07-15 19:40:33.287585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.736 [2024-07-15 19:40:33.287602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.736 qpair failed and we were unable to recover it. 00:34:22.737 [2024-07-15 19:40:33.287888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.737 [2024-07-15 19:40:33.287905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.737 qpair failed and we were unable to recover it. 00:34:22.737 [2024-07-15 19:40:33.288025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.737 [2024-07-15 19:40:33.288042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.737 qpair failed and we were unable to recover it. 00:34:22.737 [2024-07-15 19:40:33.288175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.737 [2024-07-15 19:40:33.288191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.737 qpair failed and we were unable to recover it. 00:34:22.737 [2024-07-15 19:40:33.288386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.737 [2024-07-15 19:40:33.288404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.737 qpair failed and we were unable to recover it. 00:34:22.737 [2024-07-15 19:40:33.288670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.737 [2024-07-15 19:40:33.288702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.737 qpair failed and we were unable to recover it. 00:34:22.737 [2024-07-15 19:40:33.288962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.737 [2024-07-15 19:40:33.288995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.737 qpair failed and we were unable to recover it. 00:34:22.737 [2024-07-15 19:40:33.289217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.737 [2024-07-15 19:40:33.289262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.737 qpair failed and we were unable to recover it. 00:34:22.737 [2024-07-15 19:40:33.289481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.737 [2024-07-15 19:40:33.289497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.737 qpair failed and we were unable to recover it. 
00:34:22.737 [2024-07-15 19:40:33.289778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.737 [2024-07-15 19:40:33.289810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420
00:34:22.737 qpair failed and we were unable to recover it.
[... same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated verbatim for every reconnect attempt from 19:40:33.290163 through 19:40:33.348551 ...]
00:34:22.743 [2024-07-15 19:40:33.348839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.743 [2024-07-15 19:40:33.348872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420
00:34:22.743 qpair failed and we were unable to recover it.
00:34:22.743 [2024-07-15 19:40:33.349098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.349130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.349464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.349498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.349759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.349791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.350055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.350088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.350305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.350338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.350489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.350522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.350722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.350761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.350990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.351023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.351338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.351371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.351708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.351725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 
00:34:22.743 [2024-07-15 19:40:33.351873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.351889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.352099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.352131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.352361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.352400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.352721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.352760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.353043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.353075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.353320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.353354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.353671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.353702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.353987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.354020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.354314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.354347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.354632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.354664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 
00:34:22.743 [2024-07-15 19:40:33.354977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.355010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.355195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.355239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.355412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.355444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.355627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.355643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.355919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.355935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.743 qpair failed and we were unable to recover it. 00:34:22.743 [2024-07-15 19:40:33.356258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.743 [2024-07-15 19:40:33.356292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.356538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.356571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.356913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.356946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.357245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.357279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.357577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.357618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 
00:34:22.744 [2024-07-15 19:40:33.357840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.357857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.358046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.358064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.358277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.358311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.358535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.358552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.358825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.358857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.359157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.359189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.359432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.359465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.359781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.359814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.359993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.360025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.360350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.360383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 
00:34:22.744 [2024-07-15 19:40:33.360719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.360751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.361012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.361028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.361250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.361267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.361571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.361587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.361800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.361817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.362015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.362032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.362325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.362359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.362673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.362704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.362999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.363031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.363346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.363379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 
00:34:22.744 [2024-07-15 19:40:33.363621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.363652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.363950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.363982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.364294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.364333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.364599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.364631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.364947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.364979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.365215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.365258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.365552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.365594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.365885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.365917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.366151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.366183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 00:34:22.744 [2024-07-15 19:40:33.366491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.744 [2024-07-15 19:40:33.366524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.744 qpair failed and we were unable to recover it. 
00:34:22.744 [2024-07-15 19:40:33.366802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.366834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.367146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.367178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.367510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.367544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.367792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.367824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.368057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.368073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.368380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.368414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.368711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.368727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.368980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.369022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.369255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.369288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.369605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.369636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 
00:34:22.745 [2024-07-15 19:40:33.369880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.369911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.370150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.370183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.370442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.370475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.370766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.370798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.371115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.371147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.371381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.371414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.371648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.371680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.371985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.372017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.372321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.372355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.372595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.372627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 
00:34:22.745 [2024-07-15 19:40:33.372796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.372827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.373142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.373175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.373467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.373500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.373737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.373770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.374094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.374125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.374407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.374441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.374662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.374694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.375012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.375044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.375385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.375419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.375665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.375697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 
00:34:22.745 [2024-07-15 19:40:33.376005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.376037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.376339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.376372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.376689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.376731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.377065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.377097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.377368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.377401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.377720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.377752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.378086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.378119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.378439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.378473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.378656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.378688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.378906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.378939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 
00:34:22.745 [2024-07-15 19:40:33.379263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.745 [2024-07-15 19:40:33.379295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.745 qpair failed and we were unable to recover it. 00:34:22.745 [2024-07-15 19:40:33.379569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.379602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.379923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.379955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.380176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.380208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.380531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.380564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.380717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.380749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.381096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.381139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.381365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.381399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.381690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.381722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.382013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.382045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 
00:34:22.746 [2024-07-15 19:40:33.382280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.382313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.382552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.382584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.382837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.382869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.383185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.383217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.383540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.383572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.383863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.383894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.384240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.384273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.384515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.384548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.384798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.384815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.385026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.385043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 
00:34:22.746 [2024-07-15 19:40:33.385303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.385351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.385591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.385623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.385884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.385915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.386179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.386211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.386556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.386589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.386876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.386907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.387238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.387271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.387509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.387541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.387828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.387859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.388163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.388193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 
00:34:22.746 [2024-07-15 19:40:33.388575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.388652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.388989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.389025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.389352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.389406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.389584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.389602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.389804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.389836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.390121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.390154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.390427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.390460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.390756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.390788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.391102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.391133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.391449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.391480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 
00:34:22.746 [2024-07-15 19:40:33.391710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.391743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.746 [2024-07-15 19:40:33.391914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.746 [2024-07-15 19:40:33.391930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.746 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.392145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.392176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.392438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.392471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.392780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.392812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.393094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.393126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.393353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.393386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.393620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.393652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.393974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.394006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.394290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.394322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 
00:34:22.747 [2024-07-15 19:40:33.394555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.394587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.394902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.394933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.395244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.395277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.395573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.395605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.395823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.395840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.396034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.396066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.396365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.396398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.396703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.396736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.397039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.397072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.397322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.397356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 
00:34:22.747 [2024-07-15 19:40:33.397599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.397631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.397919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.397951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.398248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.398282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.398527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.398559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.398802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.398819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.399073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.399090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.399325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.399342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.399612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.399629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.399814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.399831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.400139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.400171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 
00:34:22.747 [2024-07-15 19:40:33.400492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.400525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.400786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.400803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.401065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.401103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.401413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.401447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.401699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.401716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.402024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.402056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.402305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.402337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.402633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.402664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.402819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.402852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.403128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.403170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 
00:34:22.747 [2024-07-15 19:40:33.403432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.403465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.403711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.747 [2024-07-15 19:40:33.403743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.747 qpair failed and we were unable to recover it. 00:34:22.747 [2024-07-15 19:40:33.404034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.404066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.404289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.404322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.404639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.404671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.405015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.405047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.405386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.405420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.405710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.405743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.406023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.406055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.406279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.406312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 
00:34:22.748 [2024-07-15 19:40:33.406604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.406636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.406865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.406896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.407114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.407131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.407354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.407371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.407581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.407613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.407831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.407863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.408127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.408159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.408451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.408485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.408783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.408815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.409138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.409171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 
00:34:22.748 [2024-07-15 19:40:33.409478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.409513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.409754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.409786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.410102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.410134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.410427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.410461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.410685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.410718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.410944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.410976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.411266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.411298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.411614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.411646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.411960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.411998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.412205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.412222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 
00:34:22.748 [2024-07-15 19:40:33.412482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.412521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.412750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.412781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.413094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.413132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.413373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.413406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.413603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.413634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.413917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.413958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.414193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.414237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.414487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.414519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.414868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.414899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.415212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.415256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 
00:34:22.748 [2024-07-15 19:40:33.415549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.415581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.415865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.415896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.748 [2024-07-15 19:40:33.416186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.748 [2024-07-15 19:40:33.416219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.748 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.416541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.416573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.416765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.416797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.416980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.416997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.417198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.417214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.417508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.417541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.417771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.417788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.418069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.418087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 
00:34:22.749 [2024-07-15 19:40:33.418269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.418287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.418477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.418509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.418830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.418862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.419099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.419116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.419414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.419449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.419709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.419740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.419962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.419994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.420256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.420289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.420474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.420505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.420821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.420841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 
00:34:22.749 [2024-07-15 19:40:33.420978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.420995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.421275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.421308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.421599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.421631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.421982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.422027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.422340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.422374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.422675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.422706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.423009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.423041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.423281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.423314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.423615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.423646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.423812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.423844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 
00:34:22.749 [2024-07-15 19:40:33.424157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.424188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.424478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.424512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.424846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.424878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.425200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.425243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.425557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.425590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.425908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.425939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.426272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.426307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.426624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.749 [2024-07-15 19:40:33.426656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.749 qpair failed and we were unable to recover it. 00:34:22.749 [2024-07-15 19:40:33.427002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.427035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.427273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.427305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 
00:34:22.750 [2024-07-15 19:40:33.427625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.427658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.427977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.428010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.428180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.428212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.428535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.428573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.428836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.428853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.429052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.429068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.429327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.429370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.429657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.429689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.429875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.429907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.430205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.430269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 
00:34:22.750 [2024-07-15 19:40:33.430545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.430578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.430890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.430906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.431196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.431240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.431564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.431597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.431891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.431923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.432246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.432280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.432566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.432598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.432844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.432877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.433103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.433119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.433402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.433441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 
00:34:22.750 [2024-07-15 19:40:33.433733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.433765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.434067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.434099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.434407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.434441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.434684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.434716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.435067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.435100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.435413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.435446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.435674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.435707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.436041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.436072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.436389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.436421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.436657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.436689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 
00:34:22.750 [2024-07-15 19:40:33.436979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.437011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.437258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.437291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.437533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.437566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.437929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.437961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.438221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.438271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.438508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.438541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.438801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.438832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.439047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.439065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.750 [2024-07-15 19:40:33.439279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.750 [2024-07-15 19:40:33.439311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.750 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.439544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.439576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 
00:34:22.751 [2024-07-15 19:40:33.439801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.439833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.440143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.440176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.440479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.440511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.440758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.440791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.441105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.441138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.441382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.441416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.441668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.441700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.442042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.442073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.442389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.442423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.442676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.442707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 
00:34:22.751 [2024-07-15 19:40:33.442956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.442988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.443280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.443314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.443548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.443580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.443757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.443773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.443976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.443993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.444270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.444304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.444657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.444690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.444923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.444954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.445262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.445297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.445581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.445619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 
00:34:22.751 [2024-07-15 19:40:33.445935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.445968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.446284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.446318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.446482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.446515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.446803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.446844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.446997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.447029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.447255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.447288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.447603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.447634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.447860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.447892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.448196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.448213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.448529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.448562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 
00:34:22.751 [2024-07-15 19:40:33.448824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.448857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.449192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.449237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.449555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.449589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.449831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.449863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.450027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.450043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.450339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.450374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.450665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.450697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.451014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.451030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.451337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.451370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.751 qpair failed and we were unable to recover it. 00:34:22.751 [2024-07-15 19:40:33.451689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.751 [2024-07-15 19:40:33.451721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 
00:34:22.752 [2024-07-15 19:40:33.451943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.451975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.452287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.452320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.452571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.452604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.452946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.452978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.453293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.453325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.453600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.453643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.453870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.453886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.454018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.454035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.454321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.454356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.454574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.454605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 
00:34:22.752 [2024-07-15 19:40:33.454832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.454845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.455101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.455129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.455367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.455398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.455654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.455682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.455944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.455958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.456252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.456284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.456570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.456598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.456932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.456960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.457175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.457202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.457528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.457563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 
00:34:22.752 [2024-07-15 19:40:33.457789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.457804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.457987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.458001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.458214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.458273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.458499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.458530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.458819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.458833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.458982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.458998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.459179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.459194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.459365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.459397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.459705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.459738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.460042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.460060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 
00:34:22.752 [2024-07-15 19:40:33.460289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.460323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.460635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.460667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.460999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.461032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.461284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.461317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.461632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.461664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.461988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.462026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.462272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.462318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.462660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.462695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.462943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.752 [2024-07-15 19:40:33.462977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.752 qpair failed and we were unable to recover it. 00:34:22.752 [2024-07-15 19:40:33.463170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.463203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 
00:34:22.753 [2024-07-15 19:40:33.463535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.463571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.463801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.463818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.464099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.464131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.464365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.464405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.464646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.464681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.464985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.465020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.465309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.465344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.465516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.465551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.465776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.465811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.466128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.466164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 
00:34:22.753 [2024-07-15 19:40:33.466499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.466536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.466875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.466909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.467141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.467173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.467363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.467407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.467644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.467678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.467901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.467933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.468158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.468190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.468428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.468462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.468711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.468746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.468975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.468998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 
00:34:22.753 [2024-07-15 19:40:33.469255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.469272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.469463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.469480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.469627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.469662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.469857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.469891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.470119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.470154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.470383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.470417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.470709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.470742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.470939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.470976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.471265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.471284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.471478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.471495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 
00:34:22.753 [2024-07-15 19:40:33.471694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.471712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.471919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.471936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.472191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.472208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.472446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.753 [2024-07-15 19:40:33.472463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.753 qpair failed and we were unable to recover it. 00:34:22.753 [2024-07-15 19:40:33.472683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.472700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.472952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.472994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.473168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.473200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.473443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.473482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.473671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.473704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.473955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.473987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 
00:34:22.754 [2024-07-15 19:40:33.474221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.474266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.474430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.474463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.474721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.474738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.474925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.474941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.475137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.475169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.475477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.475511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.475847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.475890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.476074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.476090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.476301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.476334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.476556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.476588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 
00:34:22.754 [2024-07-15 19:40:33.476882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.476924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.477211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.477234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.477434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.477451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.477662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.477678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.477886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.477903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.478115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.478135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.478347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.478368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.478570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.478587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.478845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.478862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.479160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.479203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 
00:34:22.754 [2024-07-15 19:40:33.479516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.479549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.479831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.479877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.480188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.480223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.480432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.480464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.480760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.480793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.481044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.481063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.481251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.481285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.481579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.481613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.481852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.481891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.482276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.482311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 
00:34:22.754 [2024-07-15 19:40:33.482608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.482641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.482957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.482988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.483282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.483316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.483629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.483661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.754 [2024-07-15 19:40:33.483883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.754 [2024-07-15 19:40:33.483915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.754 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.484101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.484132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.484423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.484456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.484769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.484802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.485106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.485122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.485447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.485481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 
00:34:22.755 [2024-07-15 19:40:33.485662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.485694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.485936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.485968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.486199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.486215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.486513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.486556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.486721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.486752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.486974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.487006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.487241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.487258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.487396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.487414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.487716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.487752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.488067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.488099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 
00:34:22.755 [2024-07-15 19:40:33.488377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.488410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.488701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.488734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.489049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.489081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.489302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.489320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.489574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.489591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.489784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.489801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.490108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.490151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.490404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.490438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.490692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.490724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.491022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.491042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 
00:34:22.755 [2024-07-15 19:40:33.491317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.491335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.491545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.491563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.491774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.491791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.492004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.492036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.492261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.492295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.492560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.492591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.492882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.492900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.493122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.493139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.493347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.493364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.493546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.493563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 
00:34:22.755 [2024-07-15 19:40:33.493847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.493879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.494191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.494223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.494486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.494518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.494817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.494849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.495146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.495179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.755 qpair failed and we were unable to recover it. 00:34:22.755 [2024-07-15 19:40:33.495489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.755 [2024-07-15 19:40:33.495522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.495785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.495816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.496049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.496091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.496248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.496264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.496462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.496479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 
00:34:22.756 [2024-07-15 19:40:33.496627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.496673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.496929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.496961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.497187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.497219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.497389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.497421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.497649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.497681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.497994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.498027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.498367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.498402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.498689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.498721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.499055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.499071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.499201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.499219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 
00:34:22.756 [2024-07-15 19:40:33.499414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.499431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.499652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.499685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.500007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.500026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.500285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.500330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.500625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.500665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.500945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.500978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.501300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.501336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.501654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.501687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.502002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.502034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.502300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.502340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 
00:34:22.756 [2024-07-15 19:40:33.502657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.502689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.502957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.502989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.503158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.503190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.503421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.503454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.503697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.503729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.503976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.503993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.504299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.504332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.504566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.504599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.504851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.504884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.505127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.505159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 
00:34:22.756 [2024-07-15 19:40:33.505411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.505428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.505612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.505629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.505914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.505945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.506216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.506269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.506611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.506643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.506867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.756 [2024-07-15 19:40:33.506899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.756 qpair failed and we were unable to recover it. 00:34:22.756 [2024-07-15 19:40:33.507212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.507253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.507591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.507624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.507782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.507822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.507999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.508016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 
00:34:22.757 [2024-07-15 19:40:33.508209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.508253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.508499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.508530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.508822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.508853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.509170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.509202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.509461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.509495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.509744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.509776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.510074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.510106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.510421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.510455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.510712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.510745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.510924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.510956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 
00:34:22.757 [2024-07-15 19:40:33.511182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.511200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.511387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.511404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.511600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.511631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.511944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.511975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.512271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.512288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.512497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.512513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.512794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.512810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.513008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.513025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.513316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.513349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.513582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.513619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 
00:34:22.757 [2024-07-15 19:40:33.513852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.513884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.514203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.514219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.514350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.514368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.514562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.514594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.514907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.514940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.515178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.515209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.757 qpair failed and we were unable to recover it. 00:34:22.757 [2024-07-15 19:40:33.515490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.757 [2024-07-15 19:40:33.515523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.515692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.515724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.516036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.516067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.516298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.516331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 
00:34:22.758 [2024-07-15 19:40:33.516612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.516643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.516930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.516946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.517201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.517218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.517505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.517522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.517812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.517844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.518167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.518199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.518500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.518533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.518795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.518827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.519073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.519104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.519328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.519362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 
00:34:22.758 [2024-07-15 19:40:33.519653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.519684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.519867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.519883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.520109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.520126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.520384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.520410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.520687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.520704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.520903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.520921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.521203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.521220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.521422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.521440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.521715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.521732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.521956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.521973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 
00:34:22.758 [2024-07-15 19:40:33.522158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.522175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.522435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.522453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.522729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.522761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.523070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.523103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.523324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.523357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.523670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.523702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.523994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.524025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.524355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.524388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.524610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.524642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.524944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.524997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 
00:34:22.758 [2024-07-15 19:40:33.525328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.525362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.525655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.525687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.525998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.526029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.526207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.526231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.526464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.526496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.526718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.526751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.527085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.758 [2024-07-15 19:40:33.527118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.758 qpair failed and we were unable to recover it. 00:34:22.758 [2024-07-15 19:40:33.527420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.527453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.527695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.527728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.528016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.528047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 
00:34:22.759 [2024-07-15 19:40:33.528281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.528298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.528588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.528619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.528955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.528987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.529283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.529318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.529579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.529611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.529842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.529874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.530197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.530268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.530462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.530494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.530808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.530841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.531129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.531160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 
00:34:22.759 [2024-07-15 19:40:33.531319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.531354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.531585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.531621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.531939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.531971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.532205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.532257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.532449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.532466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.532685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.532717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.532939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.533016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.533417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.533455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.533626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.533658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.534002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.534034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 
00:34:22.759 [2024-07-15 19:40:33.534391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.534427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.534653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.534687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.534989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.535021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.535354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.535389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.535714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.535747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.535986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.536021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.536339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.536373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.536614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.536646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.537011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.537043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.537284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.537318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 
00:34:22.759 [2024-07-15 19:40:33.537597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.537631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.537815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.537849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.538068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.538086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.538411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.538444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.538742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.538773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.539085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.539118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.539378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.539415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.759 [2024-07-15 19:40:33.539732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.759 [2024-07-15 19:40:33.539765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.759 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.540084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.540116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.540435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.540470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 
00:34:22.760 [2024-07-15 19:40:33.540698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.540731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.540980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.540997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.541113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.541131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.541424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.541466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.541654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.541687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.541949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.541982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.542298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.542336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.542582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.542615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.542837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.542878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.543127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.543163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 
00:34:22.760 [2024-07-15 19:40:33.543572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.543607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.543914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.543947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.544267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.544285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.544543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.544590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.544832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.544864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.545043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.545074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.545362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.545381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.545658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.545691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.545933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.545966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.546289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.546324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 
00:34:22.760 [2024-07-15 19:40:33.546508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.546540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.546784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.546818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.547039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.547071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.547247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.547281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.547528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.547561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.547716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.547747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.547963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.547979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.548311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.548345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.548644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.548675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.549016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.549047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 
00:34:22.760 [2024-07-15 19:40:33.549291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.549329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.549553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.549586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.549813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.549844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.550068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.550100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.550335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.550370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.550566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.550598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.550788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.550820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.551128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.551159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.760 [2024-07-15 19:40:33.551461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.760 [2024-07-15 19:40:33.551494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.760 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.551804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.551836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 
00:34:22.761 [2024-07-15 19:40:33.552065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.552083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.552315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.552333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.552539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.552557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.552822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.552838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.553034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.553052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.553286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.553304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.553596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.553628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.553801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.553834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.554050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.554081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.554340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.554357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 
00:34:22.761 [2024-07-15 19:40:33.554568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.554600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.554892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.554925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.555148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.555179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.555486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.555520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.555708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.555740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.556035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.556051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.556251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.556268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.556481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.556519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.556756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.556788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.557124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.557142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 
00:34:22.761 [2024-07-15 19:40:33.557411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.557444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.557735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.557768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.557989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.558021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.558246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.558279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.558512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.558545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.558856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.558889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.559197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.559245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.559543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.559575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.559894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.559927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.560222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.560264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 
00:34:22.761 [2024-07-15 19:40:33.560571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.560603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.560823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.560855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.561094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.561126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.561439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.561457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.561711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.561728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.561941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.561973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.562198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.562241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.562468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.562486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.562664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.562681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 00:34:22.761 [2024-07-15 19:40:33.562890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.761 [2024-07-15 19:40:33.562921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.761 qpair failed and we were unable to recover it. 
00:34:22.761 [2024-07-15 19:40:33.563147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.563179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.563372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.563406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.563721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.563752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.563927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.563959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.564193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.564234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.564464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.564481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.564668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.564684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.564969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.565002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.565251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.565285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.565579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.565596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 
00:34:22.762 [2024-07-15 19:40:33.565801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.565818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.566107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.566125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.566406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.566423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.566627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.566644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.566853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.566870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.567131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.567162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.567450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.567483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.567803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.567835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.568081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.568160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.568548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.568586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 
00:34:22.762 [2024-07-15 19:40:33.568890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.568922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.569243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.569292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.569534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.569566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.569856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.569887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.570125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.570157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.570467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.570501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.570762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.570793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.570954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.570987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.571238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.571271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.571547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.571582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 
00:34:22.762 [2024-07-15 19:40:33.571846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.571878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:22.762 [2024-07-15 19:40:33.572142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.762 [2024-07-15 19:40:33.572175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:22.762 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.572484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.572519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.572750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.572784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.573061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.573093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.573414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.573448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.573683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.573715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.573939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.573971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.574281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.574315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.574559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.574591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 
00:34:23.043 [2024-07-15 19:40:33.574945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.574976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.575302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.575319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.575538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.575555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.575694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.575711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.575924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.575956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.576307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.576340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.576567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.576584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.576842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.576885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.577203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.577248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.577518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.577552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 
00:34:23.043 [2024-07-15 19:40:33.577901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.577933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.578169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.578185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.043 qpair failed and we were unable to recover it. 00:34:23.043 [2024-07-15 19:40:33.578440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.043 [2024-07-15 19:40:33.578458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.578662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.578678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.578809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.578825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.579106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.579122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.579433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.579466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.579784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.579817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.580119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.580156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.580468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.580501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 
00:34:23.044 [2024-07-15 19:40:33.580817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.580849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.581145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.581177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.581499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.581532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.581759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.581792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.582026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.582058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.582275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.582293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.582487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.582504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.582774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.582805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.583033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.583065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.583292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.583310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 
00:34:23.044 [2024-07-15 19:40:33.583511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.583528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.583820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.583851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.584088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.584120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.584416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.584434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.584694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.584711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.584910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.584941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.585255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.585289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.585517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.585548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.585776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.585808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.586063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.586095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 
00:34:23.044 [2024-07-15 19:40:33.586387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.586419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.586670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.586702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.586951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.586982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.587245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.587278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.587587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.587618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.587945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.587977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.588279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.588315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.588625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.588657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.588966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.588997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.589256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.589291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 
00:34:23.044 [2024-07-15 19:40:33.589609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.589641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.589967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.589999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.044 [2024-07-15 19:40:33.590237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.044 [2024-07-15 19:40:33.590271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.044 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.590512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.590544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.590858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.590891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.591188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.591219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.591530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.591564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.591810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.591842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.592177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.592214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.592391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.592423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 
00:34:23.045 [2024-07-15 19:40:33.592746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.592778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.593035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.593068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.593403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.593437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.593752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.593783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.594076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.594109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.594424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.594457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.594792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.594824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.595122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.595155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.595335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.595369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.595659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.595691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 
00:34:23.045 [2024-07-15 19:40:33.595993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.596025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.596247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.596264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.596525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.596571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.596833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.596865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.597249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.597282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.597618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.597651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.597961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.597993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.598271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.598305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.598598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.598629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.598969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.599002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 
00:34:23.045 [2024-07-15 19:40:33.599315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.599348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.599643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.599675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.599931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.599963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.600281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.600315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.600565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.600596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.600893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.600926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.601245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.601282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.601526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.601558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.601848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.601880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.602109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.602141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 
00:34:23.045 [2024-07-15 19:40:33.602393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.602426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.602720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.602752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.603085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.045 [2024-07-15 19:40:33.603117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.045 qpair failed and we were unable to recover it. 00:34:23.045 [2024-07-15 19:40:33.603431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.603465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.603722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.603755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.604046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.604079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.604321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.604354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.604533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.604566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.604920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.604959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.605216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.605267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 
00:34:23.046 [2024-07-15 19:40:33.605560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.605577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.605791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.605823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.606117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.606149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.606460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.606478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.606755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.606787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.607125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.607157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.607473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.607505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.607727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.607759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.607981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.608013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.608319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.608336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 
00:34:23.046 [2024-07-15 19:40:33.608540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.608558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.608816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.608862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.609215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.609256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.609574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.609607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.609844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.609876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.610242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.610276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.610589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.610622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.610940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.610972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.611268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.611286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.611476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.611493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 
00:34:23.046 [2024-07-15 19:40:33.611691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.611707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.611904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.611921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.612182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.612199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.612510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.612543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.612801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.612833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.613130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.613163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.613478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.613511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.613736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.613768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.614081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.614126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.614403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.614421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 
00:34:23.046 [2024-07-15 19:40:33.614741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.614773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.615080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.615112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.615346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.615363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.615616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.046 [2024-07-15 19:40:33.615633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.046 qpair failed and we were unable to recover it. 00:34:23.046 [2024-07-15 19:40:33.615926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.615958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.616276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.616309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.616550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.616583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.616851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.616882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.617186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.617223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.617552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.617586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 
00:34:23.047 [2024-07-15 19:40:33.617901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.617933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.618162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.618194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.618444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.618462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.618740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.618757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.618944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.618961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.619178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.619194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.619440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.619457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.619660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.619676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.619968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.620000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.620337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.620371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 
00:34:23.047 [2024-07-15 19:40:33.620664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.620697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.620991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.621024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.621274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.621314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.621587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.621631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.621921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.621953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.622271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.622304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.622545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.622577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.622824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.622855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.623164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.623196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.623443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.623476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 
00:34:23.047 [2024-07-15 19:40:33.623724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.623757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.624011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.624042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.624383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.624431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.624718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.624755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.624979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.625010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.625307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.625341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.625579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.625611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.047 [2024-07-15 19:40:33.625836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.047 [2024-07-15 19:40:33.625867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.047 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.626174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.626206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.626495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.626527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 
00:34:23.048 [2024-07-15 19:40:33.626842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.626874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.627109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.627141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.627454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.627471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.627783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.627815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.628150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.628183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.628428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.628463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.628706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.628738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.628962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.628994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.629292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.629332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.629571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.629603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 
00:34:23.048 [2024-07-15 19:40:33.629868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.629901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.630243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.630277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.630590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.630621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.630936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.630968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.631213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.631267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.631491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.631523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.631842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.631874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.632120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.632137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.632444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.632476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.632793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.632825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 
00:34:23.048 [2024-07-15 19:40:33.633142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.633174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.633517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.633551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.633812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.633843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.634179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.634210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.634479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.634512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.634748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.634779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.635090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.635137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.635344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.635362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.635674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.635706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.636023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.636054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 
00:34:23.048 [2024-07-15 19:40:33.636288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.636305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.636544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.636575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.636754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.636786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.637017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.637049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.637272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.637306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.637539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.637571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.637750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.637783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.638024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.638056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.048 [2024-07-15 19:40:33.638363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.048 [2024-07-15 19:40:33.638397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.048 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.638696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.638728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 
00:34:23.049 [2024-07-15 19:40:33.638973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.639005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.639180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.639211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.639489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.639521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.639813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.639845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.640161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.640192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.640516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.640533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.640817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.640833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.641035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.641052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.641345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.641383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.641602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.641635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 
00:34:23.049 [2024-07-15 19:40:33.641954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.641985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.642275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.642293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.642507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.642538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.642812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.642844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.643085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.643116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.643433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.643466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.643694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.643726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.643953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.643985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.644296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.644313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.644454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.644470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 
00:34:23.049 [2024-07-15 19:40:33.644743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.644760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.644969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.644985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.645119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.645136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.645356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.645373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.645630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.645662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.645902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.645935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.646171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.646188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.646463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.646480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.646736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.646752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.647052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.647083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 
00:34:23.049 [2024-07-15 19:40:33.647349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.647381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.647722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.647753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.648013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.648044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.648286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.648319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.648580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.648597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.648879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.648911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.649135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.649166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.649408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.649425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.649684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.649716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 00:34:23.049 [2024-07-15 19:40:33.649909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.049 [2024-07-15 19:40:33.649942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.049 qpair failed and we were unable to recover it. 
00:34:23.050 [2024-07-15 19:40:33.650241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.650273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.650520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.650552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.650893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.650925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.651239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.651256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.651569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.651604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.651766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.651796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.652015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.652046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.652296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.652313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.652522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.652542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.652822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.652853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 
00:34:23.050 [2024-07-15 19:40:33.653191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.653223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.653470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.653502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.653814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.653845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.654188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.654219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.654544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.654577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.654847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.654879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.655213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.655254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.655540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.655571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.655860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.655891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.656105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.656134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 
00:34:23.050 [2024-07-15 19:40:33.656440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.656474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.656718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.656750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.657042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.657075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.657414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.657446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.657780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.657812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.658045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.658077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.658343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.658376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.658612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.658645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.658912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.658944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.659282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.659315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 
00:34:23.050 [2024-07-15 19:40:33.659619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.659658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.659894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.659926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.660241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.660274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.660552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.660585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.660857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.660888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.661260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.661338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.661619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.661658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.661924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.661938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.662249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.662284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 00:34:23.050 [2024-07-15 19:40:33.662584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.662617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.050 qpair failed and we were unable to recover it. 
00:34:23.050 [2024-07-15 19:40:33.662931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.050 [2024-07-15 19:40:33.662962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.663173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.663205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.663528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.663561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.663860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.663891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.664197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.664239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.664526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.664558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.664871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.664902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.665198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.665240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.665527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.665564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.665792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.665824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 
00:34:23.051 [2024-07-15 19:40:33.666112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.666143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.666399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.666433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.666741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.666773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.666989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.667020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.667359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.667392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.667635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.667667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.667911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.667944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.668200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.668239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.668498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.668530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.668751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.668783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 
00:34:23.051 [2024-07-15 19:40:33.669032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.669064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.669220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.669237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.669434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.669467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.669722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.669754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.670048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.670080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.670395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.670429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.670747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.670779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.671113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.671146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.671367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.671381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.671565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.671598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 
00:34:23.051 [2024-07-15 19:40:33.671834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.671866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.672164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.672197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.672504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.672579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.672821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.672839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.673050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.673067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.673288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.673303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.673558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.673599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.673823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.673855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.674085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.674117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.051 qpair failed and we were unable to recover it. 00:34:23.051 [2024-07-15 19:40:33.674329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.051 [2024-07-15 19:40:33.674342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 
00:34:23.052 [2024-07-15 19:40:33.674527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.674559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.674806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.674839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.675128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.675160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.675511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.675544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.675860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.675892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.676215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.676256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.676541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.676554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.676762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.676775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.676994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.677009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.677294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.677328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 
00:34:23.052 [2024-07-15 19:40:33.677662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.677694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.677948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.677980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.678273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.678307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.678637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.678669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.678906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.678938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.679189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.679221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.679499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.679545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.679798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.679831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.680157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.680190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.680520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.680554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 
00:34:23.052 [2024-07-15 19:40:33.680869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.680901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.681133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.681165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.681443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.681477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.681768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.681800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.682114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.682146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.682440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.682475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.682650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.682662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.682843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.682874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.683130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.683162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.683476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.683490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 
00:34:23.052 [2024-07-15 19:40:33.683615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.683626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.683845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.683857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.683966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.683977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.684263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.684296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.684538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.684570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.684808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.684841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.052 qpair failed and we were unable to recover it. 00:34:23.052 [2024-07-15 19:40:33.685151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.052 [2024-07-15 19:40:33.685182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.685528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.685576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.685890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.685922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.686250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.686285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 
00:34:23.053 [2024-07-15 19:40:33.686549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.686580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.686811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.686843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.687131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.687163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.687429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.687462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.687713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.687744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.688034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.688065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.688294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.688327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.688487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.688518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.688770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.688806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 00:34:23.053 [2024-07-15 19:40:33.689147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.053 [2024-07-15 19:40:33.689180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.053 qpair failed and we were unable to recover it. 
00:34:23.053 [2024-07-15 19:40:33.689510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.053 [2024-07-15 19:40:33.689543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:23.053 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats for roughly 210 consecutive connection attempts in this excerpt (timestamps 19:40:33.689510 through 19:40:33.749971, elapsed markers 00:34:23.053 to 00:34:23.058); only the timestamps change, while the errno (111), tqpair (0x7fc364000b90), address (10.0.0.2), and port (4420) are identical throughout. The first and last attempts are shown ...]
00:34:23.058 [2024-07-15 19:40:33.749939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.058 [2024-07-15 19:40:33.749971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:23.058 qpair failed and we were unable to recover it.
00:34:23.058 [2024-07-15 19:40:33.750140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.750172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.750437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.750451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.750664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.750678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.750980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.751012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.751280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.751323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.751505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.751519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.751795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.751827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.752121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.752153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.752468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.752502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.752835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.752867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 
00:34:23.058 [2024-07-15 19:40:33.753019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.753051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.753293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.753326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.753643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.753676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.753983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.754015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.754318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.754351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.754610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.754647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.058 qpair failed and we were unable to recover it. 00:34:23.058 [2024-07-15 19:40:33.754989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.058 [2024-07-15 19:40:33.755021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.755247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.755280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.755596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.755628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.755941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.755974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 
00:34:23.059 [2024-07-15 19:40:33.756243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.756276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.756498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.756530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.756762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.756794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.757110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.757142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.757364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.757398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.757710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.757743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.758046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.758078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.758387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.758421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.758649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.758681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.759006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.759039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 
00:34:23.059 [2024-07-15 19:40:33.759317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.759351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.759574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.759605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.759825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.759856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.760178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.760210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.760457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.760491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.760735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.760767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.761116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.761148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.761411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.761424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.761571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.761585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.761809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.761822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 
00:34:23.059 [2024-07-15 19:40:33.762100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.762132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.762350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.762384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.762714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.762746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.762983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.763015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.763315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.763349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.763572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.763613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.763848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.763861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.764185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.764217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.764452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.764485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.764806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.764837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 
00:34:23.059 [2024-07-15 19:40:33.765061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.765093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.765387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.765421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.765735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.765767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.766125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.766158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.766476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.766508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.766783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.766820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.767138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.767169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.059 qpair failed and we were unable to recover it. 00:34:23.059 [2024-07-15 19:40:33.767463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.059 [2024-07-15 19:40:33.767476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.767741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.767772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.768012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.768044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 
00:34:23.060 [2024-07-15 19:40:33.768295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.768328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.768600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.768632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.768874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.768887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.769172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.769204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.769451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.769483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.769713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.769745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.769917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.769949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.770172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.770204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.770507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.770540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.770841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.770872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 
00:34:23.060 [2024-07-15 19:40:33.771179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.771211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.771554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.771586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.771898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.771929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.772242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.772275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.772514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.772547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.772784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.772816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.773131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.773163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.773499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.773532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.773772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.773804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.774069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.774100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 
00:34:23.060 [2024-07-15 19:40:33.774408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.774441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.774665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.774697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.774953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.774985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.775259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.775293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.775548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.775579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.775878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.775890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.776080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.776092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.776295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.776328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.776553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.776585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.776829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.776861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 
00:34:23.060 [2024-07-15 19:40:33.777177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.777208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.777467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.777500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.060 [2024-07-15 19:40:33.777786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.060 [2024-07-15 19:40:33.777818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.060 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.778129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.778161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.778458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.778492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.778752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.778789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.779035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.779066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.779284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.779317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.779620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.779651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.779873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.779905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 
00:34:23.061 [2024-07-15 19:40:33.780152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.780185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.780425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.780460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.780754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.780786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.781040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.781071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.781415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.781449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.781709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.781741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.782057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.782069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.782261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.782275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.782560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.782592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.782944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.782976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 
00:34:23.061 [2024-07-15 19:40:33.783292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.783325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.783575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.783606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.783944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.783955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.784177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.784190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.784382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.784394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.784641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.784654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.784929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.784961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.785202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.785244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.785595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.785627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.785964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.785976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 
00:34:23.061 [2024-07-15 19:40:33.786176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.786206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.786505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.786538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.786760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.786792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.786959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.786991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.787303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.787337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.787503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.787536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.787698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.787731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.787973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.787985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.788204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.788217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.788402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.788416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 
00:34:23.061 [2024-07-15 19:40:33.788671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.788704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.789021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.789055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.789337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.789370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.061 qpair failed and we were unable to recover it. 00:34:23.061 [2024-07-15 19:40:33.789692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.061 [2024-07-15 19:40:33.789726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.789960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.789974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.790254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.790294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.790542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.790575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.790742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.790755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.790966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.790998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.791325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.791358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 
00:34:23.062 [2024-07-15 19:40:33.791531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.791563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.791824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.791856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.792148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.792180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.792552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.792586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.792825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.792857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.793136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.793169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.793479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.793513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.793802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.793834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.794091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.794123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.794376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.794411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 
00:34:23.062 [2024-07-15 19:40:33.794580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.794611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.794786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.794820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.795049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.795083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.795391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.795439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.795623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.795637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.795892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.795924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.796157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.796188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.796431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.796464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.796816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.796849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.797042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.797073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 
00:34:23.062 [2024-07-15 19:40:33.797320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.797363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.797566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.797579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.797762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.797775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.797973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.798006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.798268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.798302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.798534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.798567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.798804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.798846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.799028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.799041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.799239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.799251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.799457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.799487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 
00:34:23.062 [2024-07-15 19:40:33.799704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.799735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.799971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.800003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.800245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.800278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.800501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.800532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.062 qpair failed and we were unable to recover it. 00:34:23.062 [2024-07-15 19:40:33.800763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.062 [2024-07-15 19:40:33.800795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.800972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.801008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.801242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.801275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.801516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.801549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.801844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.801875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.802101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.802133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 
00:34:23.063 [2024-07-15 19:40:33.802436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.802473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.802706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.802739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.803041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.803074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.803405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.803419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.803572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.803585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.803728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.803763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.804077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.804108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.804426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.804459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.804687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.804718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.805059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.805090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 
00:34:23.063 [2024-07-15 19:40:33.806556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.806588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.806913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.806949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.807254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.807288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.807571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.807603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.807928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.807960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.808278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.808311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.808525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.808538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.808742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.808774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.808955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.808988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.809218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.809266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 
00:34:23.063 [2024-07-15 19:40:33.809450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.809481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.809711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.809756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.809895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.809908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.810022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.810067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.810299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.810333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.810594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.810626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.814272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.814305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.814475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.814488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.814760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.814774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.815001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.815014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 
00:34:23.063 [2024-07-15 19:40:33.815282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.815296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.815496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.815509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.815759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.815773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.816033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.816047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.816302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.816316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.063 qpair failed and we were unable to recover it. 00:34:23.063 [2024-07-15 19:40:33.816565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.063 [2024-07-15 19:40:33.816582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.816715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.816729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.816929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.816942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.817232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.817247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.817433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.817447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 
00:34:23.064 [2024-07-15 19:40:33.817648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.817661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.817808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.817822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.818020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.818033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.818290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.818311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.818592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.818607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.818894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.818909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.819113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.819127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.819330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.819345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.819566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.819579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.819849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.819865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 
00:34:23.064 [2024-07-15 19:40:33.820133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.820146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.820277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.820289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.820402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.820415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.820617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.820630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.820945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.820959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.821234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.821248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.821445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.821460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.821710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.821724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.821930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.821942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.822136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.822150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 
00:34:23.064 [2024-07-15 19:40:33.822281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.822293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.822495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.822508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.822641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.822654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.822867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.822881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.823097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.823113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.823296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.823310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.823487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.823500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.823696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.823709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.823906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.823919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.824109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.824123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 
00:34:23.064 [2024-07-15 19:40:33.824348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.064 [2024-07-15 19:40:33.824361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.064 qpair failed and we were unable to recover it. 00:34:23.064 [2024-07-15 19:40:33.824582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.824595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.824737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.824753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.825024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.825039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.825264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.825279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.826510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.826537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.826819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.826833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.827595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.827621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.827910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.827925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.828124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.828137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 
00:34:23.065 [2024-07-15 19:40:33.828327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.828341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.828565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.828578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.828715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.828728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.828964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.828978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.829242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.829256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.829453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.829465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.829656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.829669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.829801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.829814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.829941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.829954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.830236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.830252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 
00:34:23.065 [2024-07-15 19:40:33.830400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.830414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.830658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.830672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.830928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.830943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.831123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.831140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.831396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.831413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.831547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.831562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.831703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.831717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.831979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.831993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.832273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.832287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.832537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.832551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 
00:34:23.065 [2024-07-15 19:40:33.832736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.832749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.832889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.832902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.833116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.833157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.834752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.834783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.835007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.835024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.835322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.835339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.835552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.835571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.835763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.835779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.835893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.835909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.836128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.836144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 
00:34:23.065 [2024-07-15 19:40:33.836254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.836269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.836466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.065 [2024-07-15 19:40:33.836485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.065 qpair failed and we were unable to recover it. 00:34:23.065 [2024-07-15 19:40:33.836640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.836657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.836814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.836831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.837176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.837192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.837331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.837353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.837603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.837619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.837805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.837823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.838029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.838045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.838276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.838293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 
00:34:23.066 [2024-07-15 19:40:33.838495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.838515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.838657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.838672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.838812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.838828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.839027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.839045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.839249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.839266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.839475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.839492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.839690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.839707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.840053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.840071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.840262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.840279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.840433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.840450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 
00:34:23.066 [2024-07-15 19:40:33.840648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.840667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.840861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.840879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.841013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.841031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.841301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.841321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.841481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.841498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.841717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.841736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.842049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.842066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.842334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.842352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.842597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.842614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.842806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.842824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 
00:34:23.066 [2024-07-15 19:40:33.843038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.843055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.843333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.843350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.844785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.844815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.845127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.845158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.845362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.845380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.846409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.846439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.846610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.846627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.846890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.846906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.847151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.847168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.847315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.847331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 
00:34:23.066 [2024-07-15 19:40:33.847594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.847611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.847751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.847767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.066 [2024-07-15 19:40:33.847964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.066 [2024-07-15 19:40:33.847980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.066 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.848255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.848271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.848470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.848485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.848698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.848718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.849063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.849080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.849292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.849309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.849495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.849511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.849667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.849685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 
00:34:23.067 [2024-07-15 19:40:33.849973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.849990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.850187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.850203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.850351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.850369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.850567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.850585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.850879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.850897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.851144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.851162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.851358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.851375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.851597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.851613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.851755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.851771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.852068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.852085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 
00:34:23.067 [2024-07-15 19:40:33.852294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.852310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.852454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.852470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.852612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.852628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.852811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.852830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.853048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.853066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.853202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.853218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.853349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.853365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.853521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.853537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.853660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.853677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.853804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.853830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 
00:34:23.067 [2024-07-15 19:40:33.854026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.854039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.854213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.854232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.854444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.854464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.854610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.854627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.854749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.854765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.854957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.854973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.855112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.855128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.855251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.855266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.855401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.855417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.855597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.855613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 
00:34:23.067 [2024-07-15 19:40:33.855735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.855751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.855897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.855913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.856138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.856153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.067 [2024-07-15 19:40:33.856348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.067 [2024-07-15 19:40:33.856364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.067 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.856538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.856554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.856745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.856765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.856962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.856978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.857166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.857182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.857389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.857405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.857539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.857555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 
00:34:23.068 [2024-07-15 19:40:33.857695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.857712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.857832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.857847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.858033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.858049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.858174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.858188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.858379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.858392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.858515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.858529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.858656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.858669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.858790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.858800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.858908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.858920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.859072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.859085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 
00:34:23.068 [2024-07-15 19:40:33.859195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.859207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.859384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.859398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.859513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.859525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.859625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.859636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.859831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.859844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.859964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.859976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.860149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.860162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.860284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.860295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.860406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.860417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.860533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.860544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 
00:34:23.068 [2024-07-15 19:40:33.861347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.861370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.861578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.861592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.861865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.861877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.862001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.862014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.862185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.862197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.862375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.862388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.862547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.862559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.862746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.862759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.862880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.862892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.863007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.863018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 
00:34:23.068 [2024-07-15 19:40:33.863132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.863143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.863266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.863279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.863463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.863476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.863578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.068 [2024-07-15 19:40:33.863590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.068 qpair failed and we were unable to recover it. 00:34:23.068 [2024-07-15 19:40:33.863775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.863787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.863953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.863968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.864186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.864198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.864378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.864392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.865151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.865174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.865336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.865350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 
00:34:23.069 [2024-07-15 19:40:33.865518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.865542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.865780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.865793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.865969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.865981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.866135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.866147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.866341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.866353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.866522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.866535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.866722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.866734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.866842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.866855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.866992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.867004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.867156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.867169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 
00:34:23.069 [2024-07-15 19:40:33.867286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.867298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.867470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.867482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.868212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.868245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.868527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.868541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.868728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.868740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.869578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.869603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.869785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.869799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.870037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.870048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.870147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.870157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.870366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.870379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 
00:34:23.069 [2024-07-15 19:40:33.870514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.870526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.870660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.870672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.870913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.870928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.871077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.871089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.871273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.871285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.871413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.871424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.871530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.871543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.871687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.871700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.871879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.871890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 00:34:23.069 [2024-07-15 19:40:33.872031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.069 [2024-07-15 19:40:33.872043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.069 qpair failed and we were unable to recover it. 
00:34:23.069 [2024-07-15 19:40:33.872214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.872232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.872471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.872483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.872596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.872610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.872690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.872701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.872903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.872915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.873069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.873081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.873233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.873246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.873421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.873434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.873686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.873699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.873812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.873824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 
00:34:23.070 [2024-07-15 19:40:33.873944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.873956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.874082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.874095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.874267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.874280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.874453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.874465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.874587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.874599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.874706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.874718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.874836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.874848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.874953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.874965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.875078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.875090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 00:34:23.070 [2024-07-15 19:40:33.875282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.070 [2024-07-15 19:40:33.875295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.070 qpair failed and we were unable to recover it. 
00:34:23.070 [2024-07-15 19:40:33.875411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.070 [2024-07-15 19:40:33.875423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:23.070 qpair failed and we were unable to recover it.
00:34:23.070 [... the same three-line failure sequence repeats back-to-back from [2024-07-15 19:40:33.875608] through [2024-07-15 19:40:33.913396] (log timestamps 00:34:23.070 to 00:34:23.361): every connect() attempt for tqpair=0x7fc364000b90 to addr=10.0.0.2, port=4420 fails with errno = 111 (connection refused), and each time the qpair fails and cannot be recovered ...]
00:34:23.361 [2024-07-15 19:40:33.913625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.913637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.913846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.913857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.914093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.914107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.914292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.914304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.914589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.914600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.914886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.914897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.915101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.915114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.915303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.915315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.915479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.915491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.915762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.915774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 
00:34:23.361 [2024-07-15 19:40:33.916004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.916026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.916220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.916236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.916544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.916556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.916725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.916738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.917037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.917049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.917231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.917242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.917449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.917461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.361 [2024-07-15 19:40:33.917693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.361 [2024-07-15 19:40:33.917705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.361 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.917911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.917922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.918103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.918115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 
00:34:23.362 [2024-07-15 19:40:33.918375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.918387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.918570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.918585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.918844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.918855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.919109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.919121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.919239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.919250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.919445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.919457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.919670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.919682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.919810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.919823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.920052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.920064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.920263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.920276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 
00:34:23.362 [2024-07-15 19:40:33.920458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.920470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.920713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.920725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.921021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.921033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.921221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.921238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.921507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.921519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.921724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.921735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.921966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.921977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.922259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.922271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.922472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.922484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.922673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.922685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 
00:34:23.362 [2024-07-15 19:40:33.922914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.922926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.923125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.923137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.923320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.923334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.923497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.923509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.923646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.923658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.923863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.923876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.924068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.924079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.924305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.924317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.924503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.924515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.924680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.924692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 
00:34:23.362 [2024-07-15 19:40:33.924965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.924976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.925146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.925158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.925324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.925337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.925593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.925605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.925861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.925873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.926058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.926070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.926256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.926269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.926450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.926462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.362 qpair failed and we were unable to recover it. 00:34:23.362 [2024-07-15 19:40:33.926631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.362 [2024-07-15 19:40:33.926643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.926844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.926857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 
00:34:23.363 [2024-07-15 19:40:33.927093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.927106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.927355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.927368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.927509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.927521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.927648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.927660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.927792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.927805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.928042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.928054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.928326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.928338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.928564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.928577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.928807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.928819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.929045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.929058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 
00:34:23.363 [2024-07-15 19:40:33.929337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.929349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.929522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.929534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.929703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.929714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.929847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.929859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.930112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.930124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.930327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.930340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.930460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.930472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.930706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.930718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.930949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.930961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.931126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.931138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 
00:34:23.363 [2024-07-15 19:40:33.931320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.931333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.931530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.931542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.931788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.931802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.932110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.932122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.932252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.932265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.932438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.932451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.932634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.932646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.932749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.932761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.933034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.933045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.933232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.933242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 
00:34:23.363 [2024-07-15 19:40:33.933451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.933460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.933693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.933702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.933952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.933961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.934090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.934100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.934302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.934312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.934501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.934511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.934638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.934647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.934776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.934786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.934989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.934998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.363 qpair failed and we were unable to recover it. 00:34:23.363 [2024-07-15 19:40:33.935186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.363 [2024-07-15 19:40:33.935195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 
00:34:23.364 [2024-07-15 19:40:33.935441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.935450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.935680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.935690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.935941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.935950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.936177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.936187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.936390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.936401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.936579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.936589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.936867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.936877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.936989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.936999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.937170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.937180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.937350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.937361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 
00:34:23.364 [2024-07-15 19:40:33.937612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.937622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.937798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.937808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.937994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.938004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.938260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.938272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.938549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.938560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.938821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.938832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.939015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.939026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.939207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.939217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.939473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.939485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.939659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.939671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 
00:34:23.364 [2024-07-15 19:40:33.939852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.939864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.940120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.940132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.940314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.940328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.940557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.940569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.940802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.940814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.941013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.941025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.941275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.941288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.941465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.941477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.941672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.941684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.941935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.941947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 
00:34:23.364 [2024-07-15 19:40:33.942056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.942068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.942326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.942339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.942442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.942452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.942567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.942579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.364 [2024-07-15 19:40:33.942836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.364 [2024-07-15 19:40:33.942848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.364 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.943052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.943063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.943321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.943334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.943520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.943531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.943790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.943803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.944057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.944068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 
00:34:23.365 [2024-07-15 19:40:33.944239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.944252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.944503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.944514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.944712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.944724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.944905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.944917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.945110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.945122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.945357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.945369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.945530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.945543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.945787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.945799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.945971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.945982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.946200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.946241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 
00:34:23.365 [2024-07-15 19:40:33.946504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.946521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.946758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.946774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.947054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.947069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.947256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.947272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.947461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.947477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.947755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.947771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.948035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.948050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.948310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.948326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.948565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.948581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.948768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.948783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 
00:34:23.365 [2024-07-15 19:40:33.949095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.949110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.949319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.949335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.949589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.949609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.949742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.949757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.950021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.950036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.950245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.950261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.950454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.950470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.950653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.950669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.950909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.950923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.951169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.951184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 
00:34:23.365 [2024-07-15 19:40:33.951426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.951442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.951573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.951589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.951692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.951706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.951895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.951913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.952164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.952179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.365 qpair failed and we were unable to recover it. 00:34:23.365 [2024-07-15 19:40:33.952416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.365 [2024-07-15 19:40:33.952432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.952553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.952568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.952756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.952772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.953017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.953031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.953209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.953240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 
00:34:23.366 [2024-07-15 19:40:33.953453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.953468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.953756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.953772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.954065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.954081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.954406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.954422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.954623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.954638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.954808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.954823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.955005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.955020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.955221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.955242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.955500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.955515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.955813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.955839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 
00:34:23.366 [2024-07-15 19:40:33.956104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.956119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.956376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.956392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.956582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.956597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.956848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.956862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.957045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.957060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.957248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.957264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.957512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.957527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.957710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.957726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.957966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.957981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.958167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.958183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 
00:34:23.366 [2024-07-15 19:40:33.958316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.958330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.958513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.958528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.958765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.958785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.959125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.959140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.959344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.959359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.959664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.959679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.959815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.959829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.960110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.960124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.960333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.960349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.960541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.960557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 
00:34:23.366 [2024-07-15 19:40:33.960701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.960716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.961021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.961036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.961180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.961195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.961411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.961428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.961685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.961700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.961893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.366 [2024-07-15 19:40:33.961907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.366 qpair failed and we were unable to recover it. 00:34:23.366 [2024-07-15 19:40:33.962086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.962102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.962296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.962313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.962467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.962482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.962595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.962610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 
00:34:23.367 [2024-07-15 19:40:33.962877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.962892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.963174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.963188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.963454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.963470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.963708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.963723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.963931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.963946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.964134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.964150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.964279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.964294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.964526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.964541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.964713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.964728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.965027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.965062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 
00:34:23.367 [2024-07-15 19:40:33.965335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.965349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.965557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.965570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.965745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.965757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.965886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.965900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.966090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.966102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.966344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.966356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.966545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.966557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.966680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.966692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.966853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.966865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.967127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.967138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 
00:34:23.367 [2024-07-15 19:40:33.967423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.967436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.967568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.967580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.967834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.967846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.968032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.968045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.968275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.968287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.968408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.968419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.968534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.968545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.968659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.968671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.968844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.968855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.969065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.969076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 
00:34:23.367 [2024-07-15 19:40:33.969201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.969214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.969402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.969415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.969533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.969544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.969715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.969727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.969940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.969951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.970126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.970138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.970362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.367 [2024-07-15 19:40:33.970373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.367 qpair failed and we were unable to recover it. 00:34:23.367 [2024-07-15 19:40:33.970548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.970560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.970759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.970771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.970982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.970994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 
00:34:23.368 [2024-07-15 19:40:33.971111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.971123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.971292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.971304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.971503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.971515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.971693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.971705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.972000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.972013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.972132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.972144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.972273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.972284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.972449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.972461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.972645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.972657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.972825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.972838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 
00:34:23.368 [2024-07-15 19:40:33.973032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.973044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.973297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.973310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.973584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.973596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.973732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.973744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.973927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.973938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.974148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.974160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.974392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.974405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.974598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.974609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.974759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.974770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.975054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.975066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 
00:34:23.368 [2024-07-15 19:40:33.975258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.975270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.975439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.975451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.975626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.975638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.975823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.975836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.976074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.976086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.976303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.976315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.976594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.976606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.976779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.976790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.977038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.977049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.977281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.977293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 
00:34:23.368 [2024-07-15 19:40:33.977405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.977417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.977676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.977688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.977867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.977880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.978129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.978141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.978262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.978274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.978452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.978464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.978666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.978678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.978781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.368 [2024-07-15 19:40:33.978792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.368 qpair failed and we were unable to recover it. 00:34:23.368 [2024-07-15 19:40:33.978983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.978994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.979123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.979134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 
00:34:23.369 [2024-07-15 19:40:33.979316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.979328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.979517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.979528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.979667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.979680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.979795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.979806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.980069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.980080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.980197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.980209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.980430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.980443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.980618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.980630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.980807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.980819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.980985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.981000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 
00:34:23.369 [2024-07-15 19:40:33.981184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.981195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.981396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.981408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.981665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.981678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.981917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.981929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.982137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.982149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.982329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.982342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.982540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.982553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.982738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.982750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.983026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.983039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.983320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.983333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 
00:34:23.369 [2024-07-15 19:40:33.983523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.983534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.983654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.983665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.983896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.983908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.984082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.984094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.984399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.984411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.984580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.984591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.984777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.984788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.985030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.985042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.985229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.985243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.985457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.985469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 
00:34:23.369 [2024-07-15 19:40:33.985652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.985664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.985913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.985925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.986100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.986111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.369 [2024-07-15 19:40:33.986253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.369 [2024-07-15 19:40:33.986265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.369 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.986376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.986387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.986620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.986632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.986752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.986763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.986937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.986949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.987128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.987140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.987413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.987426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 
00:34:23.370 [2024-07-15 19:40:33.987641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.987653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.987876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.987888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.988009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.988020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.988188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.988200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.988444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.988456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.988688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.988700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.988876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.988888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.989000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.989012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.989127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.989139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.989330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.989344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 
00:34:23.370 [2024-07-15 19:40:33.989457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.989468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.989657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.989670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.989845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.989857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.990112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.990125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.990313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.990326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.990511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.990523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.990775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.990788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.991041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.991054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.991242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.991256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.991449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.991461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 
00:34:23.370 [2024-07-15 19:40:33.991587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.991599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.991788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.991800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.991917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.991928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.992108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.992120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.992349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.992361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.992545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.992556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.992730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.992742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.992859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.992871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.993043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.993055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.993176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.993188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 
00:34:23.370 [2024-07-15 19:40:33.993370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.993383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.993581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.993592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.993847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.993859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.994064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.994076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.370 qpair failed and we were unable to recover it. 00:34:23.370 [2024-07-15 19:40:33.994187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.370 [2024-07-15 19:40:33.994199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.994384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.994396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.994652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.994664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.994875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.994886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.995073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.995084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.995220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.995236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 
00:34:23.371 [2024-07-15 19:40:33.995416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.995428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.995559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.995571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.995760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.995772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.995949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.995962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.996143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.996154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.996273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.996285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.996408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.996420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.996589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.996601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.996717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.996728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.996988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.997002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 
00:34:23.371 [2024-07-15 19:40:33.997146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.997158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.997445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.997458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.997622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.997633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.997809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.997821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.998071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.998083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.998308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.998320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.998526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.998538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.998713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.998727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.999288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.999301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.999535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.999546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 
00:34:23.371 [2024-07-15 19:40:33.999732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.999744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:33.999864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:33.999876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.000064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.000076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.000264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.000276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.000508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.000520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.000777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.000789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.000987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.000999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.001277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.001290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.001502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.001514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.001647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.001659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 
00:34:23.371 [2024-07-15 19:40:34.001859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.001871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.002049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.002061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.002281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.002292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.002485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.002496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.002672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.371 [2024-07-15 19:40:34.002684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.371 qpair failed and we were unable to recover it. 00:34:23.371 [2024-07-15 19:40:34.002832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.002843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.003015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.003026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.003287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.003299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.003463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.003475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.003681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.003692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 
00:34:23.372 [2024-07-15 19:40:34.003981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.003993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.004112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.004124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.004315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.004327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.004446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.004456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.004690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.004701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.004910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.004921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.005049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.005060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.005183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.005196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.005441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.005454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.005633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.005649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 
00:34:23.372 [2024-07-15 19:40:34.005826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.005838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.006027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.006038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.006153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.006165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.006399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.006411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.006667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.006679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.006843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.006856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.007113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.007125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.007331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.007343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.007508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.007520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.007752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.007764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 
00:34:23.372 [2024-07-15 19:40:34.007963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.007976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.008146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.008156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.008330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.008342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.008491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.008503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.008675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.008686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.008948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.008960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.009168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.009180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.009381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.009393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.009555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.009567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.009743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.009755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 
00:34:23.372 [2024-07-15 19:40:34.009945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.009957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.010217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.010233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.010428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.010440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.010624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.010635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.010770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.010781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.010972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.372 [2024-07-15 19:40:34.010984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.372 qpair failed and we were unable to recover it. 00:34:23.372 [2024-07-15 19:40:34.011267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.011280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.011397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.011409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.011588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.011600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.011808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.011819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 
00:34:23.373 [2024-07-15 19:40:34.011995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.012008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.012124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.012136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.012312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.012325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.012584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.012596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.012738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.012750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.012977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.012990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.013205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.013217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.013372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.013385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.013578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.013590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.013716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.013729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 
00:34:23.373 [2024-07-15 19:40:34.013965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.013977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.014253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.014266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.014524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.014535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.014656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.014667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.014946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.014959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.015141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.015154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.015341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.015352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.015609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.015621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.015753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.015764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.016008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.016020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 
00:34:23.373 [2024-07-15 19:40:34.016140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.016153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.016325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.016336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.016541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.016553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.016723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.016735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.016910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.016922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.017119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.017131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.017304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.017316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.017422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.017432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.017637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.017649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.017795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.017806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 
00:34:23.373 [2024-07-15 19:40:34.018083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.018094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.373 [2024-07-15 19:40:34.018266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.373 [2024-07-15 19:40:34.018278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.373 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.018460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.018472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.018669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.018681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.018936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.018948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.019126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.019138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.019313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.019326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.019437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.019449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.019572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.019583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.019792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.019804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 
00:34:23.374 [2024-07-15 19:40:34.019987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.019999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.020181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.020192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.020382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.020394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.020575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.020587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.020700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.020711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.020872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.020884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.021087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.021099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.021329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.021342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.021510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.021522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.021650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.021663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 
00:34:23.374 [2024-07-15 19:40:34.021814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.021825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.022029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.022041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.022230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.022243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.022497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.022509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.022742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.022754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.023050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.023062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.023245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.023258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.023463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.023474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.023734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.023746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.024089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.024101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 
00:34:23.374 [2024-07-15 19:40:34.024281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.024293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.024460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.024473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.024657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.024669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.024851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.024864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.025051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.025063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.025232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.025244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.025386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.025398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.025524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.025536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.025637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.025647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.025839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.025850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 
00:34:23.374 [2024-07-15 19:40:34.026106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.026118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.026296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.026308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.374 qpair failed and we were unable to recover it. 00:34:23.374 [2024-07-15 19:40:34.026435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.374 [2024-07-15 19:40:34.026447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.026632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.026644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.026897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.026909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.027089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.027101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.027351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.027364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.027499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.027511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.027687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.027699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.027944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.027957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 
00:34:23.375 [2024-07-15 19:40:34.028236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.028248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.028435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.028446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.028577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.028589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.028786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.028798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.028990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.029002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.029190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.029203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.029497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.029509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.029641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.029652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.029886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.029899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.030155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.030168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 
00:34:23.375 [2024-07-15 19:40:34.030431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.030443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.030650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.030661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.030850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.030862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.031116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.031128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.031379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.031391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.031559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.031571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.031758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.031770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.032032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.032044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.032171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.032183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.032297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.032309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 
00:34:23.375 [2024-07-15 19:40:34.032478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.032490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.032679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.032690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.032864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.032877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.033024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.033036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.033151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.033162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.033302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.033314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.033477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.033488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.033751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.033763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.034033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.034045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.034303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.034316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 
00:34:23.375 [2024-07-15 19:40:34.034549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.034560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.034750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.034762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.034921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.375 [2024-07-15 19:40:34.034932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.375 qpair failed and we were unable to recover it. 00:34:23.375 [2024-07-15 19:40:34.035204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.035216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.035393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.035405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.035578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.035591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.035776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.035788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.035992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.036004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.036245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.036257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.036440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.036452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 
00:34:23.376 [2024-07-15 19:40:34.036631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.036643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.036814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.036826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.037002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.037014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.037198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.037211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.037330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.037343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.037520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.037532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.037669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.037681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.037871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.037883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.038085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.038096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.038333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.038347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 
00:34:23.376 [2024-07-15 19:40:34.038524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.038535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.038768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.038779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.038975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.038987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.039239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.039252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.039454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.039465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.039652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.039664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.039835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.039847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.040112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.040124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.040334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.040346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.040484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.040496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 
00:34:23.376 [2024-07-15 19:40:34.040664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.040675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.040881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.040893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.041007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.041019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.041304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.041317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.041498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.041510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.041649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.041661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.041893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.041904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.042085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.042097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.042330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.042342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.042455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.042467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 
00:34:23.376 [2024-07-15 19:40:34.042647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.042659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.042905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.042917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.043089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.043101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.043276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.376 [2024-07-15 19:40:34.043289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.376 qpair failed and we were unable to recover it. 00:34:23.376 [2024-07-15 19:40:34.043471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.043483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.043614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.043626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.043812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.043825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.043989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.044001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.044260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.044272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.044454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.044466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 
00:34:23.377 [2024-07-15 19:40:34.044660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.044672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.044849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.044861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.045068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.045080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.045187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.045197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.045388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.045400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.045531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.045543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.045716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.045728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.045831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.045843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.046083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.046095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.046214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.046235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 
00:34:23.377 [2024-07-15 19:40:34.046386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.046397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.046525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.046536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.046821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.046833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.047040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.047052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.047282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.047294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.047426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.047439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.047642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.047654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.047940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.047952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.048214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.048230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 00:34:23.377 [2024-07-15 19:40:34.048439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.377 [2024-07-15 19:40:34.048451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.377 qpair failed and we were unable to recover it. 
00:34:23.377 [2024-07-15 19:40:34.048634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.377 [2024-07-15 19:40:34.048645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:23.377 qpair failed and we were unable to recover it.
00:34:23.377 [... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats without variation from 19:40:34.048776 through 19:40:34.091158 ...]
00:34:23.383 [2024-07-15 19:40:34.091343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.383 [2024-07-15 19:40:34.091355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:23.383 qpair failed and we were unable to recover it.
00:34:23.383 [2024-07-15 19:40:34.091543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.091555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.091833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.091845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.092099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.092110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.092280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.092292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.092418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.092430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.092683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.092695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.092937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.092952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.093073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.093085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.093291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.093303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.093562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.093575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 
00:34:23.383 [2024-07-15 19:40:34.093806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.093818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.094000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.094012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.094252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.094265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.094500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.094512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.094678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.094690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.094930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.094942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.095135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.095147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.095259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.095272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.095454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.095466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.095649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.095661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 
00:34:23.383 [2024-07-15 19:40:34.095846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.095857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.095996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.096008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.096177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.096189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.096372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.096385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.096549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.096561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.096674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.096686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.096922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.096934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.097203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.097215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.097364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.097376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.097543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.097556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 
00:34:23.383 [2024-07-15 19:40:34.097682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.097694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.097968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.097980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.098158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.098171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.098375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.098388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.098614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.098626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.098737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.098749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.098858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.098870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.383 qpair failed and we were unable to recover it. 00:34:23.383 [2024-07-15 19:40:34.098990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.383 [2024-07-15 19:40:34.099001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.099242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.099254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.099488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.099501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 
00:34:23.384 [2024-07-15 19:40:34.099686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.099698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.099809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.099821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.099997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.100009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.100261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.100273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.100374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.100385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.100512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.100524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.100708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.100722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.100979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.100991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.101156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.101168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.101338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.101350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 
00:34:23.384 [2024-07-15 19:40:34.101567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.101579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.101739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.101751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.101939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.101951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.102209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.102221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.102374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.102386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.102562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.102573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.102808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.102820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.103012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.103024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.103185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.103198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.103481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.103494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 
00:34:23.384 [2024-07-15 19:40:34.103615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.103626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.103765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.103776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.103906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.103917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.104020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.104032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.104146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.104158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.104328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.104340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.104457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.104469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.104644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.104655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.104922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.104934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.105051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.105063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 
00:34:23.384 [2024-07-15 19:40:34.105230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.105243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.105386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.105397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.105582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.105594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.105761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.105773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.105981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.105993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.106177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.106188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.106361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.106373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.106526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.384 [2024-07-15 19:40:34.106538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.384 qpair failed and we were unable to recover it. 00:34:23.384 [2024-07-15 19:40:34.106722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.106735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.106909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.106921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 
00:34:23.385 [2024-07-15 19:40:34.107152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.107163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.107329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.107341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.107550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.107562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.107763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.107775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.108012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.108024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.108755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.108775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.109052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.109066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.109279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.109291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.109536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.109548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.109736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.109749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 
00:34:23.385 [2024-07-15 19:40:34.109948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.109960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.110094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.110105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.110309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.110321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.110518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.110530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.110739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.110751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.110976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.110988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.111173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.111185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.111404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.111416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.111543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.111554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.111669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.111681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 
00:34:23.385 [2024-07-15 19:40:34.111812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.111824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.112011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.112023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.112277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.112290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.112408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.112419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.112552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.112563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.112744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.112755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.112944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.112955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.113073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.113086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.113206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.113218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.113390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.113402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 
00:34:23.385 [2024-07-15 19:40:34.113503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.113514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.113679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.113691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.113897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.113909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.114092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.114103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.114299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.114311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.114420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.114432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.114616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.114627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.114729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.114739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.114863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.385 [2024-07-15 19:40:34.114875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.385 qpair failed and we were unable to recover it. 00:34:23.385 [2024-07-15 19:40:34.115042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.115055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 
00:34:23.386 [2024-07-15 19:40:34.115337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.115350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.115516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.115528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.115642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.115653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.115814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.115825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.116119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.116130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.116332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.116344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.116575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.116589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.116766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.116778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.116983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.116996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.117191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.117203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 
00:34:23.386 [2024-07-15 19:40:34.117313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.117324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.117503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.117515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.117688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.117700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.117827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.117839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.118007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.118018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.118200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.118211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.118314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.118325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.118579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.118591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.118757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.118770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.118963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.118975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 
00:34:23.386 [2024-07-15 19:40:34.119236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.119249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.119442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.119454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.119581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.119591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.119798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.119811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.120100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.120111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.120304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.120316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.120426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.120438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.120576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.120588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.120715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.120727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.120935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.120946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 
00:34:23.386 [2024-07-15 19:40:34.121180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.121192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.121397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.121409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.121614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.121626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.121759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.386 [2024-07-15 19:40:34.121772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.386 qpair failed and we were unable to recover it. 00:34:23.386 [2024-07-15 19:40:34.122067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.122081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.122204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.122216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.122427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.122441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.122579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.122591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.122824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.122836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.123023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.123034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 
00:34:23.387 [2024-07-15 19:40:34.123212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.123228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.123400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.123414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.123613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.123625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.123832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.123845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.124044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.124057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.124169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.124181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.124415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.124432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.124561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.124573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.124674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.124686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.124802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.124814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 
00:34:23.387 [2024-07-15 19:40:34.125080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.125093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.125280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.125293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.125484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.125496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.125661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.125674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.125796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.125808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.125915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.125928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.126044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.126055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.126259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.126271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.126434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.126447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.126644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.126657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 
00:34:23.387 [2024-07-15 19:40:34.126780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.126792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.126978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.126991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.127118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.127130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.127296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.127307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.127408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.127420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.127606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.127618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.127729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.127740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.127901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.127913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.128143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.128155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.128388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.128400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 
00:34:23.387 [2024-07-15 19:40:34.128540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.128553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.128666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.128679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.128834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.128846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.129006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.129018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.387 [2024-07-15 19:40:34.129259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.387 [2024-07-15 19:40:34.129271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.387 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.129362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.129374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.129554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.129570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.129699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.129710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.129824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.129837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.130011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.130023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 
00:34:23.388 [2024-07-15 19:40:34.130124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.130137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.130303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.130318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.130495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.130507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.130702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.130713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.130966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.130978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.131118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.131130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.131240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.131254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.131458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.131470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.131638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.131649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.131922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.131933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 
00:34:23.388 [2024-07-15 19:40:34.132120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.132131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.132331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.132343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.132530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.132542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.132779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.132792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.132979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.132993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.133107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.133119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.133367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.133379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.133555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.133567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.133759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.133770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.134105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.134117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 
00:34:23.388 [2024-07-15 19:40:34.134304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.134317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.134552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.134565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.134796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.134809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.135000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.135013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.135183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.135195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.135381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.135393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.135570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.135582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.135712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.135723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.135996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.136008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.136179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.136190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 
00:34:23.388 [2024-07-15 19:40:34.136297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.136310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.136473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.136484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.136718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.136729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.136923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.136936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.137133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.137146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.388 [2024-07-15 19:40:34.137407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.388 [2024-07-15 19:40:34.137420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.388 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.137540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.137552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.137683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.137694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.137912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.137924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.138107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.138118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 
00:34:23.389 [2024-07-15 19:40:34.138309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.138321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.138496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.138508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.138709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.138721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.138911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.138923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.139060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.139071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.139257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.139269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.139385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.139399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.139580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.139592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.139792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.139805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.140067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.140079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 
00:34:23.389 [2024-07-15 19:40:34.140211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.140227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.140387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.140399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.140536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.140548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.140736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.140748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.140880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.140892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.141074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.141086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.141282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.141294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.141480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.141491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.141625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.141637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.141808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.141820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 
00:34:23.389 [2024-07-15 19:40:34.142007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.142019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.142202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.142214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.142384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.142410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.142593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.142608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.142903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.142918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.143129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.143144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.143344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.143361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.143533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.143549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.143791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.143807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.144092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.144107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 
00:34:23.389 [2024-07-15 19:40:34.144343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.144359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.144536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.144552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.144745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.144761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.144878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.144898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.145076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.145091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.145280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.145296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.145491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.389 [2024-07-15 19:40:34.145506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.389 qpair failed and we were unable to recover it. 00:34:23.389 [2024-07-15 19:40:34.145630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.145646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.145831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.145847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.146088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.146103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 
00:34:23.390 [2024-07-15 19:40:34.146295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.146311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.146424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.146438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.146585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.146600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.146790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.146806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.147005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.147021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.147198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.147214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.147341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.147356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.147555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.147571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.147832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.147847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.148035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.148051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 
00:34:23.390 [2024-07-15 19:40:34.148314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.148330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.148521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.148537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.148722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.148739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.148876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.148891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.149130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.149146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.149335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.149351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.149557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.149572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.149762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.149778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.149978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.149994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.150204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.150219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 
00:34:23.390 [2024-07-15 19:40:34.150441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.150458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.150697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.150713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.150940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.150956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.151148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.151163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.151363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.151379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.151587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.151603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.151782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.151797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.151980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.151996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.152187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.152202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.152357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.152373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 
00:34:23.390 [2024-07-15 19:40:34.152549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.152564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.152709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.152724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.390 qpair failed and we were unable to recover it. 00:34:23.390 [2024-07-15 19:40:34.153031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.390 [2024-07-15 19:40:34.153047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.153304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.153320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.153511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.153526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.153767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.153783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.154004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.154020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.154204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.154219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.154426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.154442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.154571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.154586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 
00:34:23.391 [2024-07-15 19:40:34.154774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.154789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.155030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.155046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.155252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.155268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.155454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.155469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.155605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.155621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.155841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.155857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.156116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.156132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.156320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.156337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.156532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.156547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.156695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.156712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 
00:34:23.391 [2024-07-15 19:40:34.156843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.156859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.157035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.157051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.157237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.157252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.157440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.157456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.157649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.157664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.157951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.157965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.158157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.158173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.158361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.158377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.158520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.158536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.158656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.158672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 
00:34:23.391 [2024-07-15 19:40:34.158867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.158882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.159030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.159063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.159273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.159287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.159460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.159471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.159649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.159660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.159929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.159940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.160102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.160113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.160345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.160357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.160612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.160624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.160830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.160840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 
00:34:23.391 [2024-07-15 19:40:34.161043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.161055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.161285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.161297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.161491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.161503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.391 qpair failed and we were unable to recover it. 00:34:23.391 [2024-07-15 19:40:34.161734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.391 [2024-07-15 19:40:34.161745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.161870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.161885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.162117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.162128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.162336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.162347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.162543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.162554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.162732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.162743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.162877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.162889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 
00:34:23.392 [2024-07-15 19:40:34.163140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.163151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.163331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.163344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.163599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.163610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.163808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.163819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.164000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.164012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.164256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.164268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.164529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.164541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.164714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.164726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.164966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.164977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.165161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.165172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 
00:34:23.392 [2024-07-15 19:40:34.165380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.165391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.165576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.165587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.165776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.165787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.165956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.165968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.166235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.166246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.166373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.166385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.166565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.166577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.166778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.166790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.166975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.166986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.167108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.167119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 
00:34:23.392 [2024-07-15 19:40:34.167410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.167422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.167654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.167666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.167836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.167848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.168136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.168148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.168392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.168404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.168651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.168662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.168784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.168795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.169079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.169092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.169272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.169284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.169495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.169506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 
00:34:23.392 [2024-07-15 19:40:34.169681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.169692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.169813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.169825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.170079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.170090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.170375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.392 [2024-07-15 19:40:34.170387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.392 qpair failed and we were unable to recover it. 00:34:23.392 [2024-07-15 19:40:34.170641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.170654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.170792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.170804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.171095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.171107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.171212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.171236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.171363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.171374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.171581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.171592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 
00:34:23.393 [2024-07-15 19:40:34.171769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.171780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.172063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.172074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.172324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.172336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.172461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.172473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.172575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.172588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.172776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.172788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.173022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.173033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.173166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.173178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.173363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.173375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.173606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.173618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 
00:34:23.393 [2024-07-15 19:40:34.173749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.173760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.174047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.174059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.174270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.174282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.174457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.174470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.174601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.174613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.174790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.174802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.175002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.175014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.175200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.175211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.175499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.175511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.175776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.175787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 
00:34:23.393 [2024-07-15 19:40:34.175921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.175932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.176164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.176176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.176365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.176377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.176543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.176554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.176741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.176753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.176881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.176893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.177071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.177083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.177251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.177263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.177433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.177446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.177678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.177689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 
00:34:23.393 [2024-07-15 19:40:34.177899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.177910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.178164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.178175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.178303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.178315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.178498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.178509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.178676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.393 [2024-07-15 19:40:34.178691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.393 qpair failed and we were unable to recover it. 00:34:23.393 [2024-07-15 19:40:34.178901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.178912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.179119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.179130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.179382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.179394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.179519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.179531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.179767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.179779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 
00:34:23.394 [2024-07-15 19:40:34.179989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.180001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.180236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.180249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.180427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.180440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.180584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.180597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.180838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.180850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.181135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.181147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.181319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.181331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.181519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.181531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.181656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.181667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.181845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.181857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 
00:34:23.394 [2024-07-15 19:40:34.181963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.181975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.182149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.182161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.182351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.182364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.182493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.182505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.182629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.182641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.182825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.182836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.182957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.182969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.183149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.183161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.183380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.183392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.183640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.183652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 
00:34:23.394 [2024-07-15 19:40:34.183823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-15 19:40:34.183835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.394 qpair failed and we were unable to recover it. 00:34:23.394 [2024-07-15 19:40:34.184025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.675 [2024-07-15 19:40:34.184039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.675 qpair failed and we were unable to recover it. 00:34:23.675 [2024-07-15 19:40:34.184154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.675 [2024-07-15 19:40:34.184168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.675 qpair failed and we were unable to recover it. 00:34:23.675 [2024-07-15 19:40:34.184388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.675 [2024-07-15 19:40:34.184401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.675 qpair failed and we were unable to recover it. 00:34:23.675 [2024-07-15 19:40:34.184592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.675 [2024-07-15 19:40:34.184604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.675 qpair failed and we were unable to recover it. 00:34:23.675 [2024-07-15 19:40:34.184770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.675 [2024-07-15 19:40:34.184782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.675 qpair failed and we were unable to recover it. 00:34:23.675 [2024-07-15 19:40:34.185046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.675 [2024-07-15 19:40:34.185058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.675 qpair failed and we were unable to recover it. 00:34:23.675 [2024-07-15 19:40:34.185232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.675 [2024-07-15 19:40:34.185243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.675 qpair failed and we were unable to recover it. 00:34:23.675 [2024-07-15 19:40:34.186077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.675 [2024-07-15 19:40:34.186097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.675 qpair failed and we were unable to recover it. 00:34:23.675 [2024-07-15 19:40:34.186351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.675 [2024-07-15 19:40:34.186364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.675 qpair failed and we were unable to recover it. 
00:34:23.675 [2024-07-15 19:40:34.186547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.675 [2024-07-15 19:40:34.186559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:23.675 qpair failed and we were unable to recover it.
00:34:23.681 [2024-07-15 19:40:34.229467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.229479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.229691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.229703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.229879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.229891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.230098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.230110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.230388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.230401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.230564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.230576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.230696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.230708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.230958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.230970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.231232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.231245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.231410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.231422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 
00:34:23.681 [2024-07-15 19:40:34.231620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.231632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.231901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.231914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.232171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.232183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.232295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.232308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.232427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.232439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.232694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.232706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.232958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.232969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.233212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.233227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.233479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.233490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.681 qpair failed and we were unable to recover it. 00:34:23.681 [2024-07-15 19:40:34.233671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.681 [2024-07-15 19:40:34.233682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 
00:34:23.682 [2024-07-15 19:40:34.233978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.233989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.234190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.234203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.234450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.234462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.234712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.234724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.234925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.234938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.235107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.235118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.235374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.235387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.235570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.235581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.235747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.235759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.236036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.236048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 
00:34:23.682 [2024-07-15 19:40:34.236215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.236232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.236399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.236411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.236698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.236709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.236913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.236925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.237103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.237115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.237360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.237372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.237548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.237560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.237791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.237803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.238036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.238048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.238291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.238303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 
00:34:23.682 [2024-07-15 19:40:34.238536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.238548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.238825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.238836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.239079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.239091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.239326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.239339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.239571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.239584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.239863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.239875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.240042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.240054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.240232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.240244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.240502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.240515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.240704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.240715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 
00:34:23.682 [2024-07-15 19:40:34.241014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.241025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.241208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.241220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.241418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.241430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.241680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.241691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.241872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.241883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.682 qpair failed and we were unable to recover it. 00:34:23.682 [2024-07-15 19:40:34.242122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.682 [2024-07-15 19:40:34.242134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.242324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.242337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.242544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.242556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.242806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.242819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.243019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.243031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 
00:34:23.683 [2024-07-15 19:40:34.243151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.243162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.243393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.243408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.243584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.243596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.243777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.243789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.244046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.244058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.244237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.244249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.244420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.244432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.244620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.244631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.244831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.244843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.245076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.245088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 
00:34:23.683 [2024-07-15 19:40:34.245347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.245359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.245564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.245575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.245855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.245866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.246122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.246133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.246365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.246377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.246609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.246621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.246912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.246923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.247173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.247184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.247349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.247361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.247621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.247632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 
00:34:23.683 [2024-07-15 19:40:34.247838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.247851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.248126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.248138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.248404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.248416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.248671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.248682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.248915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.248927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.249045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.249056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.249315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.249327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.249431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.249442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.249677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.249689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.249921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.249934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 
00:34:23.683 [2024-07-15 19:40:34.250121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.250134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.250364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.250377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.250611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.250623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.250853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.683 [2024-07-15 19:40:34.250865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.683 qpair failed and we were unable to recover it. 00:34:23.683 [2024-07-15 19:40:34.251060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.251071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.251202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.251213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.251488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.251501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.251720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.251732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.251847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.251858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.251977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.251989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 
00:34:23.684 [2024-07-15 19:40:34.252112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.252124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.252262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.252276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.252446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.252457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.252627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.252638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.252846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.252858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.252976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.252988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.253149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.253160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.253330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.253342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.253542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.253555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.253732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.253743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 
00:34:23.684 [2024-07-15 19:40:34.253940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.253952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.254192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.254204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.254338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.254351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.254542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.254555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.254771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.254783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.255021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.255033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.255236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.255249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.255372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.255383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.255572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.255584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.255825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.255837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 
00:34:23.684 [2024-07-15 19:40:34.256094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.256107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.256382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.256394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.256577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.256589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.256821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.256833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.684 qpair failed and we were unable to recover it. 00:34:23.684 [2024-07-15 19:40:34.257017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.684 [2024-07-15 19:40:34.257028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.257159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.257171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.257346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.257359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.257590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.257602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.257840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.257851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.258038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.258050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 
00:34:23.685 [2024-07-15 19:40:34.258172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.258183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.258425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.258437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.258566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.258577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.258833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.258844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.259088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.259100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.259271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.259283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.259467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.259478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.259603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.259615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.259811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.259823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.259991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.260003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 
00:34:23.685 [2024-07-15 19:40:34.260275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.260287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.260546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.260559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.260818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.260830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.260948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.260960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.261223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.261238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.261414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.261427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.261632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.261643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.261829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.261841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.262094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.262105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.262400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.262412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 
00:34:23.685 [2024-07-15 19:40:34.262616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.262628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.262860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.262871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.263040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.263052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.263167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.263178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.263357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.263370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.263506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.263517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.263760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.263771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.263963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.263974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.264139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.264151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.264418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.264430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 
00:34:23.685 [2024-07-15 19:40:34.264535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.264547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.264729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.685 [2024-07-15 19:40:34.264740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.685 qpair failed and we were unable to recover it. 00:34:23.685 [2024-07-15 19:40:34.265015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.265027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.265259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.265271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.265503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.265514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.265700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.265712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.265827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.265839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.266035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.266046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.266221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.266238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.266404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.266416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 
00:34:23.686 [2024-07-15 19:40:34.266700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.266712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.266890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.266902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.267159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.267170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.267380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.267392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.267576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.267588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.267843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.267854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.268057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.268069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.268323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.268335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.268562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.268574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.268748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.268759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 
00:34:23.686 [2024-07-15 19:40:34.268996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.269008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.269190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.269205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.269407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.269419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.269650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.269661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.269852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.269865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.269974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.269986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.270169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.270181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.270442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.270454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.270687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.270698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.270867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.270879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 
00:34:23.686 [2024-07-15 19:40:34.271084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.271096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.271272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.271283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.271467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.271479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.271737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.271748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.271938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.271950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.272127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.272139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.272406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.272418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.272531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.686 [2024-07-15 19:40:34.272542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.686 qpair failed and we were unable to recover it. 00:34:23.686 [2024-07-15 19:40:34.272775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.272787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.272978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.272990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 
00:34:23.687 [2024-07-15 19:40:34.273191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.273202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.273419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.273431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.273661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.273672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.273876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.273887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.274056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.274067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.274273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.274285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.274455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.274467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.274713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.274725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.274895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.274906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.275080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.275092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 
00:34:23.687 [2024-07-15 19:40:34.275274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.275286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.275457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.275468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.275731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.275743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.275927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.275939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.276115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.276126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.276292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.276303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.276516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.276527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.276734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.276745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.276942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.276954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.277210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.277221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 
00:34:23.687 [2024-07-15 19:40:34.277506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.277517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.277779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.277792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.277895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.277908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.278109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.278120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.278304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.278317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.278422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.278433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.278597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.278608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.278807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.278818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.279083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.279095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.279377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.279389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 
00:34:23.687 [2024-07-15 19:40:34.279554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.279566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.279746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.279757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.279890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.279901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.280157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.280168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.280296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.280308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.280563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.280574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.280819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.280831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.687 qpair failed and we were unable to recover it. 00:34:23.687 [2024-07-15 19:40:34.281039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.687 [2024-07-15 19:40:34.281051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.281304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.281315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.281432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.281444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 
00:34:23.688 [2024-07-15 19:40:34.281678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.281689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.281872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.281885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.282144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.282155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.282332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.282344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.282512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.282524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.282731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.282742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.283028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.283039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.283306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.283318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.283504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.283515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.283770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.283782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 
00:34:23.688 [2024-07-15 19:40:34.283952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.283964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.284136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.284148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.284425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.284437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.284606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.284617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.284728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.284739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.284940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.284951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.285183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.285195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.285361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.285374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.285604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.285616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.285831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.285843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 
00:34:23.688 [2024-07-15 19:40:34.286139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.286150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.286405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.286418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.286648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.286659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.286892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.286904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.287031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.287043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.287212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.287228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.287459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.287471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.287654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.287665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.287857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.287868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.288044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.288056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 
00:34:23.688 [2024-07-15 19:40:34.288267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.288278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.288517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.288529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.288783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.288795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.288962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.288974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.289176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.289188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.289421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.688 [2024-07-15 19:40:34.289433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.688 qpair failed and we were unable to recover it. 00:34:23.688 [2024-07-15 19:40:34.289636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.289647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.289828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.289840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.289953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.289964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.290220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.290241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 
00:34:23.689 [2024-07-15 19:40:34.290494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.290505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.290736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.290748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.290911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.290923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.291173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.291184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.291364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.291376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.291625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.291636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.291818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.291830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.292082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.292094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.292346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.292358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.292582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.292594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 
00:34:23.689 [2024-07-15 19:40:34.292695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.292706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.292824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.292835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.293091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.293102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.293381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.293392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.293652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.293663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.293921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.293933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.294102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.294114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.294368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.294380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.294508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.294519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.294771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.294782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 
00:34:23.689 [2024-07-15 19:40:34.294947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.294958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.295148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.295162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.295416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.295428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.295690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.295701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.295864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.295875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.296132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.296144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.296400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.296412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.296595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.689 [2024-07-15 19:40:34.296606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.689 qpair failed and we were unable to recover it. 00:34:23.689 [2024-07-15 19:40:34.296835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.296846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.297026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.297038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 
00:34:23.690 [2024-07-15 19:40:34.297206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.297218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.297414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.297425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.297627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.297639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.297827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.297838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.298014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.298025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.298260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.298271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.298454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.298466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.298570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.298582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.298838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.298850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.299013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.299025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 
00:34:23.690 [2024-07-15 19:40:34.299211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.299223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.299490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.299502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.299687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.299699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.299976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.299988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.300179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.300190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.300433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.300445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.300641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.300652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.300818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.300829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.301013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.301027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.301258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.301269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 
00:34:23.690 [2024-07-15 19:40:34.301451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.301463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.301631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.301643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.301909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.301920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.302096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.302108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.302222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.302242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.302478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.302490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.302724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.302735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.303002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.303014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.303198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.303209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.303396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.303408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 
00:34:23.690 [2024-07-15 19:40:34.303587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.303598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.303831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.303842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.304100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.690 [2024-07-15 19:40:34.304111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.690 qpair failed and we were unable to recover it. 00:34:23.690 [2024-07-15 19:40:34.304367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.304380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.304506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.304518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.304633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.304644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.304909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.304921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.305021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.305032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.305290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.305302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.305510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.305523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 
00:34:23.691 [2024-07-15 19:40:34.305704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.305716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.305907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.305919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.306098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.306110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.306378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.306391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.306626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.306638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.306818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.306831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.307059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.307071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.307302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.307314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.307480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.307493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.307766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.307778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 
00:34:23.691 [2024-07-15 19:40:34.308028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.308040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.308270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.308282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.308471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.308482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.308713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.308724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.308906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.308918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.309084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.309096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.309280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.309292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.309477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.309489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.309722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.309735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.310027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.310038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 
00:34:23.691 [2024-07-15 19:40:34.310270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.310282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.310417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.310428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.310537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.310549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.310779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.310791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.310904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.310915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.311101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.311113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.311232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.311244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.311488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.311500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.311675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.311686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.691 qpair failed and we were unable to recover it. 00:34:23.691 [2024-07-15 19:40:34.311931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.691 [2024-07-15 19:40:34.311942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 
00:34:23.692 [2024-07-15 19:40:34.312171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.312183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.312362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.312374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.312609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.312621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.312873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.312885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.313151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.313163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.313418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.313430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.313714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.313725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.313923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.313934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.314103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.314115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.314310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.314321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 
00:34:23.692 [2024-07-15 19:40:34.314491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.314502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.314734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.314745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.315001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.315012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.315189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.315201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.315367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.315379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.315641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.315653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.315822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.315834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.315969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.315980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.316203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.316214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.316383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.316395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 
00:34:23.692 [2024-07-15 19:40:34.316665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.316676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.316928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.316939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.317172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.317183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.317414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.317426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.317717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.317729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.317898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.317910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.318180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.318192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.318446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.318459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.318698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.318712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.318964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.318977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 
00:34:23.692 [2024-07-15 19:40:34.319183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.319194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.319372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.319383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.319579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.319590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.319791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.319803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.319971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.319983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.320144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.320155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.320280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.320292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.692 qpair failed and we were unable to recover it. 00:34:23.692 [2024-07-15 19:40:34.320538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.692 [2024-07-15 19:40:34.320550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.320778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.320790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.320987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.320999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 
00:34:23.693 [2024-07-15 19:40:34.321256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.321268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.321502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.321513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.321719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.321732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.321912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.321923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.322045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.322057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.322297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.322308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.322476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.322487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.322603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.322615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.322780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.322792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.322990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.323001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 
00:34:23.693 [2024-07-15 19:40:34.323196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.323208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.323410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.323422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.323590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.323601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.323798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.323810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.324045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.324057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.324286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.324298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.324485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.324496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.324753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.324764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.324868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.324879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.325139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.325150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 
00:34:23.693 [2024-07-15 19:40:34.325409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.325421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.325536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.325548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.325800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.325811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.325986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.325998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.326206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.326218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.326391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.326403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.326649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.326660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.326941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.326953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.327220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.327237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.327490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.327502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 
00:34:23.693 [2024-07-15 19:40:34.327739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.327749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.693 [2024-07-15 19:40:34.328008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.693 [2024-07-15 19:40:34.328019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.693 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.328275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.328287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.328494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.328506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.328706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.328718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.328997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.329009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.329177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.329190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.329490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.329502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.329733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.329744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.329860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.329872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 
00:34:23.694 [2024-07-15 19:40:34.330147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.330159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.330391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.330403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.330572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.330584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.330819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.330831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.330962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.330974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.331206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.331218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.331451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.331462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.331648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.331659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.331789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.331800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 00:34:23.694 [2024-07-15 19:40:34.331966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.694 [2024-07-15 19:40:34.331977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.694 qpair failed and we were unable to recover it. 
00:34:23.694 [2024-07-15 19:40:34.332232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.694 [2024-07-15 19:40:34.332243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:23.694 qpair failed and we were unable to recover it.
00:34:23.694 [... the same three-record failure repeats back-to-back from 2024-07-15 19:40:34.332474 through 19:40:34.377392 (Jenkins time 00:34:23.694 to 00:34:23.700): every connect() attempt fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420, and each qpair fails without recovery ...]
00:34:23.700 [2024-07-15 19:40:34.377576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.700 [2024-07-15 19:40:34.377588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:23.700 qpair failed and we were unable to recover it.
00:34:23.700 [2024-07-15 19:40:34.377753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.377765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.378000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.378012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.378184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.378195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.378384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.378396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.378664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.378677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.378928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.378940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.379122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.379134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.379329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.379340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.379445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.379458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.379687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.379699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 
00:34:23.700 [2024-07-15 19:40:34.379869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.379881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.380108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.380119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.380352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.380364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.380538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.380550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.380757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.380768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.381015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.381026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.381322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.381334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.381538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.381549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.381810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.381822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.382110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.382121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 
00:34:23.700 [2024-07-15 19:40:34.382325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.382337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.382515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.382526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.382706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.382718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.382971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.382983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.383105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.383117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.383366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.383378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.383547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.383558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.383730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.383741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.700 qpair failed and we were unable to recover it. 00:34:23.700 [2024-07-15 19:40:34.383970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.700 [2024-07-15 19:40:34.383982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.384097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.384108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 
00:34:23.701 [2024-07-15 19:40:34.384275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.384286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.384539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.384551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.384757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.384769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.384934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.384946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.385151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.385162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.385343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.385355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.385484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.385496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.385726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.385737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.385928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.385939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.386134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.386146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 
00:34:23.701 [2024-07-15 19:40:34.386388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.386400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.386658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.386671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.386841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.386853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.387032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.387044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.387236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.387248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.387359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.387371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.387495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.387507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.387622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.387633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.387863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.387876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.388009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.388020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 
00:34:23.701 [2024-07-15 19:40:34.388275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.388287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.388418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.388430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.388560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.388571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.388756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.388767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.388899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.388910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.389116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.389127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.389379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.389391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.389513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.389525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.389782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.389795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.389911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.389923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 
00:34:23.701 [2024-07-15 19:40:34.390173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.390184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.390443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.390455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.390664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.390675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.390881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.390893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.391175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.701 [2024-07-15 19:40:34.391187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.701 qpair failed and we were unable to recover it. 00:34:23.701 [2024-07-15 19:40:34.391384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.391396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.391594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.391606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.391791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.391802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.391974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.391987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.392169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.392180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 
00:34:23.702 [2024-07-15 19:40:34.392423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.392434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.392553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.392565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.392744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.392755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.392925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.392937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.393179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.393191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.393461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.393472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.393715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.393726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.394028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.394039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.394295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.394307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.394540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.394551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 
00:34:23.702 [2024-07-15 19:40:34.394720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.394732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.394916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.394927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.395102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.395114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.395230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.395242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.395496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.395507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.395695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.395707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.395807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.395819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.395999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.396010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.396203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.396216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.396404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.396416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 
00:34:23.702 [2024-07-15 19:40:34.396589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.396600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.396835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.396846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.397077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.397089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.397381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.397393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.397623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.397634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.397735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.397747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.397931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.397943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.398192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.398203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.398370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.398382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 00:34:23.702 [2024-07-15 19:40:34.398635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.702 [2024-07-15 19:40:34.398648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.702 qpair failed and we were unable to recover it. 
00:34:23.702 [2024-07-15 19:40:34.398903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.398915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.399106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.399118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.399305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.399317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.399565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.399577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.399836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.399847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.400104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.400116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.400284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.400295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.400549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.400560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.400723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.400735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.400860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.400872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 
00:34:23.703 [2024-07-15 19:40:34.401102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.401113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.401351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.401363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.401538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.401549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.401750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.401761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.404390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.404402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.404657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.404668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.404896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.404907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.405038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.405050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.405300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.405311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.405509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.405520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 
00:34:23.703 [2024-07-15 19:40:34.405772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.405783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.405978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.405990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.406219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.406234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.406410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.406422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.406678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.406689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.406943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.406954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.407185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.407196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.407323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.407334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.407588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.407602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.407833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.407844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 
00:34:23.703 [2024-07-15 19:40:34.408025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.408036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.408156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.408167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.408330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.408342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.408572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.408584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.408843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.408854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.703 qpair failed and we were unable to recover it. 00:34:23.703 [2024-07-15 19:40:34.409082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.703 [2024-07-15 19:40:34.409093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.409323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.409334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.409518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.409530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.409698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.409710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.409920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.409932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 
00:34:23.704 [2024-07-15 19:40:34.410186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.410197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.410362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.410374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.410559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.410571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.410801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.410812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.411012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.411023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.411273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.411285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.411466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.411477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.411598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.411609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.411843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.411854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.412116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.412128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 
00:34:23.704 [2024-07-15 19:40:34.412243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.412255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.412432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.412443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.412561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.412573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.412754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.412765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.412939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.412951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.413127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.413141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.413308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.413320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.413446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.413458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.413649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.413660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.413842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.413853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 
00:34:23.704 [2024-07-15 19:40:34.414023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.414035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.414265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.414277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.414440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.414453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.414704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.414716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.414909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.414920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.415127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.415138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.415397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.415409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.415578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.415589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.415843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.415855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.416032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.416043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 
00:34:23.704 [2024-07-15 19:40:34.416220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.416236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.416402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.416414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.416601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.704 [2024-07-15 19:40:34.416613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.704 qpair failed and we were unable to recover it. 00:34:23.704 [2024-07-15 19:40:34.416804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.416816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.417062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.417074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.417308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.417321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.417441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.417454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.417563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.417574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.417766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.417778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.417975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.417987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 
00:34:23.705 [2024-07-15 19:40:34.418205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.418216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.418490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.418503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.418737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.418749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.418989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.419002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.419236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.419248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.419480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.419492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.419660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.419672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.419942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.419954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.420118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.420129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.420296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.420308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 
00:34:23.705 [2024-07-15 19:40:34.420495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.420507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.420624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.420636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.420869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.420881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.421136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.421147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.421327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.421340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.421593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.421608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.421864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.421876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.422132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.422144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.422383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.422395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.422580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.422591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 
00:34:23.705 [2024-07-15 19:40:34.422788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.422800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.423043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.423055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.423233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.423245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.423506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.423517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.423706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.423718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.423940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.423951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.424119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.424131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.424384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.424396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.424516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.424527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.705 [2024-07-15 19:40:34.424759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.424771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 
00:34:23.705 [2024-07-15 19:40:34.424935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.705 [2024-07-15 19:40:34.424947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.705 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.425180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.425191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.425357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.425369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.425546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.425557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.425815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.425826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.426089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.426101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.426354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.426366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.426491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.426503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.426693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.426704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.426912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.426924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 
00:34:23.706 [2024-07-15 19:40:34.427132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.427143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.427258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.427270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.427439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.427451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.427701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.427713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.427944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.427957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.428148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.428160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.428418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.428431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.428708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.428720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.428819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.428831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.429075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.429086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 
00:34:23.706 [2024-07-15 19:40:34.429198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.429209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.429332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.429344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.429575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.429586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.429827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.429838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.430074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.430085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.430346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.430360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.430591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.430603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.430860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.430873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.431041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.431052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.431174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.431186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 
00:34:23.706 [2024-07-15 19:40:34.431428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.431439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.431646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.431657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.431910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.431921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.432106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.432118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.432308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.432320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.432494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.432506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.432615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.432627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.432829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.432841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.432957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.706 [2024-07-15 19:40:34.432970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.706 qpair failed and we were unable to recover it. 00:34:23.706 [2024-07-15 19:40:34.433092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.433104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 
00:34:23.707 [2024-07-15 19:40:34.433281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.433293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.433421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.433433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.433692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.433704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.433839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.433850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.434034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.434045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.434316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.434328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.434577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.434589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.434786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.434798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.435052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.435063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.435332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.435344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 
00:34:23.707 [2024-07-15 19:40:34.435569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.435581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.435756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.435768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.436009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.436020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.436278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.436290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.436544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.436556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.436722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.436735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.436864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.436875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.437042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.437053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.437312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.437324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.437512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.437523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 
00:34:23.707 [2024-07-15 19:40:34.437697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.437709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.437877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.437889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.438123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.438135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.438356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.438368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.438637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.438649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.438775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.438788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.439001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.439012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.439286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.439298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.439531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.439543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.439799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.439811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 
00:34:23.707 [2024-07-15 19:40:34.440040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.440052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.440313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.440325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.440495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.440506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.440767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.707 [2024-07-15 19:40:34.440778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.707 qpair failed and we were unable to recover it. 00:34:23.707 [2024-07-15 19:40:34.441006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.441018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.441321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.441333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.441565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.441577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.441807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.441818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.442085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.442096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.442295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.442307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 
00:34:23.708 [2024-07-15 19:40:34.442485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.442497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.442601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.442613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.442795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.442807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.443011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.443023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.443260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.443273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.443443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.443454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.443704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.443716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.443815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.443829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.443931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.443942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.444201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.444212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 
00:34:23.708 [2024-07-15 19:40:34.444470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.444482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.444720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.444731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.444966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.444977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.445243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.445256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.445424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.445436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.445615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.445626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.445802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.445813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.445944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.445955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.446215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.446236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.446489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.446501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 
00:34:23.708 [2024-07-15 19:40:34.446736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.446747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.447004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.447015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.447182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.447194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.447328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.447340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.447501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.447512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.447743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.447756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.448037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.448049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.448220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.448234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.448405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.448417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.448595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.448607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 
00:34:23.708 [2024-07-15 19:40:34.448849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.448861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.449119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.449131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.708 [2024-07-15 19:40:34.449362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.708 [2024-07-15 19:40:34.449374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.708 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.449631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.449642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.449872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.449884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.450087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.450098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.450352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.450365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.450465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.450477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.450755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.450767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.450949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.450961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 
00:34:23.709 [2024-07-15 19:40:34.451213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.451230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.451534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.451545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.451728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.451740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.451918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.451930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.452160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.452172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.452348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.452360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.452612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.452624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.452847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.452859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.453044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.453056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.453164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.453176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 
00:34:23.709 [2024-07-15 19:40:34.453292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.453304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.453544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.453555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.453751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.453762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.454008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.454019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.454234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.454246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.454415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.454426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.454630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.454642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.454771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.454783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.455038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.455050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.455228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.455239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 
00:34:23.709 [2024-07-15 19:40:34.455400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.455412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.455661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.455673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.455856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.455869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.456100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.456112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.456288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.456300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.456419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.456432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.456682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.456694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.456807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.456818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.456989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.709 [2024-07-15 19:40:34.457002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.709 qpair failed and we were unable to recover it. 00:34:23.709 [2024-07-15 19:40:34.457257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.457277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 
00:34:23.710 [2024-07-15 19:40:34.457527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.457539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.457776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.457787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.458044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.458055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.458235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.458247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.458434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.458445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.458639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.458650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.458848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.458859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.459095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.459106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.459336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.459349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.459567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.459579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 
00:34:23.710 [2024-07-15 19:40:34.459756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.459768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.459946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.459957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.460075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.460087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.460295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.460307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.460535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.460547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.460713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.460725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.460942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.460954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.461219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.461234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.461466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.461478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.461661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.461673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 
00:34:23.710 [2024-07-15 19:40:34.461772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.461783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.462014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.462025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.462307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.462320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.462515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.462527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.462779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.462790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.463045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.463056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.463313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.463325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.463565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.463577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.463751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.463762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.463993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.464004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 
00:34:23.710 [2024-07-15 19:40:34.464180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.464191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.464413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.464426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.464681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.464693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.464873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.464886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.465144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.465156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.465323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.465337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.710 [2024-07-15 19:40:34.465454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.710 [2024-07-15 19:40:34.465465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.710 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.465697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.465709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.465872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.465883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.466114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.466125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 
00:34:23.711 [2024-07-15 19:40:34.466300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.466313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.466585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.466597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.466838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.466849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.467120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.467132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.467387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.467399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.467603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.467615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.467814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.467825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.467992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.468003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.468188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.468200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.468473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.468486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 
00:34:23.711 [2024-07-15 19:40:34.468673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.468685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.468872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.468883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.469139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.469151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.469274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.469286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.469483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.469495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.469738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.469749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.469955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.469966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.470198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.470209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.470415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.470427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.470610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.470622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 
00:34:23.711 [2024-07-15 19:40:34.470720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.470732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.470827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.470838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.471014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.471026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.471281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.471293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.471469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.471481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.471647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.471658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.471825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.471837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.472006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.472018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.472291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.472303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.472481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.472493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 
00:34:23.711 [2024-07-15 19:40:34.472730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.472741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.472930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.472941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.711 [2024-07-15 19:40:34.473201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.711 [2024-07-15 19:40:34.473212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.711 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.473389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.473401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.473654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.473665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.473894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.473908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.474167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.474178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.474382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.474394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.474509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.474520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.474782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.474793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 
00:34:23.712 [2024-07-15 19:40:34.475045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.475057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.475170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.475183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.475359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.475371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.475601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.475613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.475840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.475852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.476080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.476091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.476377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.476389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.476565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.476576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.476874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.476886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.477123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.477135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 
00:34:23.712 [2024-07-15 19:40:34.477417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.477429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.477683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.477695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.477879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.477891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.478061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.478072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.478311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.478323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.478525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.478537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.478765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.478777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.478943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.478955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.479155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.479167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.479431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.479444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 
00:34:23.712 [2024-07-15 19:40:34.479719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.479730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.479962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.479974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.480241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.480254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.480381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.480393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.480571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.480582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.712 qpair failed and we were unable to recover it. 00:34:23.712 [2024-07-15 19:40:34.480696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.712 [2024-07-15 19:40:34.480708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.480966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.480978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.481145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.481156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.481391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.481403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.481653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.481665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 
00:34:23.713 [2024-07-15 19:40:34.481894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.481906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.482163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.482174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.482428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.482440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.482629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.482642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.482874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.482885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.483145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.483159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.483349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.483361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.483527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.483539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.483657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.483669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.483927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.483938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 
00:34:23.713 [2024-07-15 19:40:34.484192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.484204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.484405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.484417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.484620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.484632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.484886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.484898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.485106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.485117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.485405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.485418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.485677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.485688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.485949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.485961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.486242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.486254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.486381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.486393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 
00:34:23.713 [2024-07-15 19:40:34.486495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.486506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.486691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.486703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.486877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.486888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.487120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.487132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.487363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.487375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.487550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.487562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.487675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.487687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.487940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.487951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.488124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.488136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.488373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.488384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 
00:34:23.713 [2024-07-15 19:40:34.488501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.488513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.488695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.488706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.488906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.488943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.489164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.489181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.489468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.489485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.489737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.489753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.713 qpair failed and we were unable to recover it. 00:34:23.713 [2024-07-15 19:40:34.489927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.713 [2024-07-15 19:40:34.489943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.490131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.490147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.490271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.490286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.490534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.490549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 
00:34:23.714 [2024-07-15 19:40:34.490808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.490824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.491116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.491132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.491310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.491326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.491587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.491603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.491867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.491883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.492066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.492086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.492281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.492296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.492487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.492503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.492634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.492649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.492913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.492928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 
00:34:23.714 [2024-07-15 19:40:34.493051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.493067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.493198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.493213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.493430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.493446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.493711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.493727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.493902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.493918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.494158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.494173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.494355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.494371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.494691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.494707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.494878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.494893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.495135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.495151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 
00:34:23.714 [2024-07-15 19:40:34.495353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.495368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.495561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.495576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.495769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.495784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.495919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.495935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.496199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.496214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.496479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.496495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.496602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.496617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.496790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.496805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.496991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.497007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.497266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.497281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 
00:34:23.714 [2024-07-15 19:40:34.497460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.497476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.497682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.497697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.497910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.497924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.498127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.498138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.498265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.498276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.714 [2024-07-15 19:40:34.498456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.714 [2024-07-15 19:40:34.498468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.714 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.498654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.498666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.498920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.498931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.499114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.499126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.499387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.499399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 
00:34:23.715 [2024-07-15 19:40:34.499630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.499643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.499815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.499826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.499946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.499959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.500074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.500085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.500281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.500293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.500474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.500488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.500592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.500604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.500851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.500862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.501051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.501063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.501278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.501290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 
00:34:23.715 [2024-07-15 19:40:34.501472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.501483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.501715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.501727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.501957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.501969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.502082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.502094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.502365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.502377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.502616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.502627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.502803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.502814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.502993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.503005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.503259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.503271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.503530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.503542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 
00:34:23.715 [2024-07-15 19:40:34.503719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.503730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.503996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.504007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.504263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.504275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.504480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.504491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.504736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.504747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.504857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.504868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.505057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.505068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.505325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.505337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.505586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.505598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 00:34:23.715 [2024-07-15 19:40:34.505715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.715 [2024-07-15 19:40:34.505727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.715 qpair failed and we were unable to recover it. 
00:34:23.715 [2024-07-15 19:40:34.505895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.505906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 00:34:23.716 [2024-07-15 19:40:34.506088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.506099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 00:34:23.716 [2024-07-15 19:40:34.506391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.506408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 00:34:23.716 [2024-07-15 19:40:34.506613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.506628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 00:34:23.716 [2024-07-15 19:40:34.506838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.506853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 00:34:23.716 [2024-07-15 19:40:34.506985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.507000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 00:34:23.716 [2024-07-15 19:40:34.507269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.507284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 00:34:23.716 [2024-07-15 19:40:34.507471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.507486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 00:34:23.716 [2024-07-15 19:40:34.507724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.507739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 00:34:23.716 [2024-07-15 19:40:34.507999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.508013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 
00:34:23.716 [2024-07-15 19:40:34.508184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.508199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.716 qpair failed and we were unable to recover it. 00:34:23.716 [2024-07-15 19:40:34.508406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.716 [2024-07-15 19:40:34.508422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.508661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.508677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.508898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.508915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.509131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.509146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.509384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.509402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.509644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.509659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.509899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.509914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.510032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.510047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.510285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.510301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 
00:34:23.997 [2024-07-15 19:40:34.510511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.510526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.510767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.510782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.511051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.511066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.511265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.511280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.511557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.511574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.511768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.511784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.512024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.512039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.512219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.512238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.512479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.512495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 00:34:23.997 [2024-07-15 19:40:34.512776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.997 [2024-07-15 19:40:34.512791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.997 qpair failed and we were unable to recover it. 
00:34:23.997 [2024-07-15 19:40:34.513053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.513068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.513277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.513293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.513490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.513505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.513629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.513644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.513815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.513831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.514093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.514108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.514293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.514309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.514495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.514511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.514698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.514713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.514952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.514968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 
00:34:23.998 [2024-07-15 19:40:34.515171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.515186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.515482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.515498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.515769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.515784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.516039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.516050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.516233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.516246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.516554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.516566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.516794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.516805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.516972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.516984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.517216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.517235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.517416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.517428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 
00:34:23.998 [2024-07-15 19:40:34.517680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.517691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.517946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.517958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.518124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.518135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.518388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.518400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.518582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.518594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.518775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.518789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.519050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.519063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.519251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.519263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.519515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.519526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 00:34:23.998 [2024-07-15 19:40:34.519689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.998 [2024-07-15 19:40:34.519701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:23.998 qpair failed and we were unable to recover it. 
00:34:23.998 [2024-07-15 19:40:34.519892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.998 [2024-07-15 19:40:34.519903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:23.998 qpair failed and we were unable to recover it.
[log condensed: the same three-line failure sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats for every connection attempt from 19:40:34.519892 through 19:40:34.566103, mostly against tqpair=0x7fc364000b90 and briefly against tqpair=0x7fc35c000b90, always with addr=10.0.0.2, port=4420]
00:34:24.004 [2024-07-15 19:40:34.566091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.004 [2024-07-15 19:40:34.566103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:24.004 qpair failed and we were unable to recover it.
00:34:24.004 [2024-07-15 19:40:34.566301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.004 [2024-07-15 19:40:34.566314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.004 qpair failed and we were unable to recover it. 00:34:24.004 [2024-07-15 19:40:34.566499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.004 [2024-07-15 19:40:34.566510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.004 qpair failed and we were unable to recover it. 00:34:24.004 [2024-07-15 19:40:34.566720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.004 [2024-07-15 19:40:34.566731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.004 qpair failed and we were unable to recover it. 00:34:24.004 [2024-07-15 19:40:34.566916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.004 [2024-07-15 19:40:34.566928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.004 qpair failed and we were unable to recover it. 00:34:24.004 [2024-07-15 19:40:34.567126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.004 [2024-07-15 19:40:34.567137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.004 qpair failed and we were unable to recover it. 00:34:24.004 [2024-07-15 19:40:34.567384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.004 [2024-07-15 19:40:34.567396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.567580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.567591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.567759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.567770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.567936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.567947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.568121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.568133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 
00:34:24.005 [2024-07-15 19:40:34.568378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.568390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.568571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.568582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.568746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.568759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.568951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.568963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.569197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.569208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.569473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.569485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.569660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.569671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.569901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.569912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.570144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.570155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.570431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.570443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 
00:34:24.005 [2024-07-15 19:40:34.570699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.570711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.570890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.570902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.571187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.571199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.571385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.571397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.571560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.571571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.571755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.571767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.571996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.572008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.572261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.572273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.572404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.572415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.572658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.572669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 
00:34:24.005 [2024-07-15 19:40:34.572852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.572863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.573024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.573036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.573152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.573164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.573434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.573446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.573630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.573641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.573818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.573829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.574010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.574022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.574186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.574198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.574451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.574462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.574720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.574732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 
00:34:24.005 [2024-07-15 19:40:34.574910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.574922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.575109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.575120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.005 [2024-07-15 19:40:34.575382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.005 [2024-07-15 19:40:34.575395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.005 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.575578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.575590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.575708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.575719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.575904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.575917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.576166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.576178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.576442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.576453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.576636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.576647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.576835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.576847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 
00:34:24.006 [2024-07-15 19:40:34.577076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.577088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.577265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.577276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.577415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.577428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.577712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.577724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.577930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.577941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.578125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.578137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.578246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.578258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.578490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.578501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.578706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.578717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.578975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.578987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 
00:34:24.006 [2024-07-15 19:40:34.579260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.579272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.579528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.579540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.579776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.579787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.579989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.580001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.580238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.580250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.580443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.580455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.580624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.580636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.580895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.580906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.581087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.581099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.581286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.581297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 
00:34:24.006 [2024-07-15 19:40:34.581483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.581494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.581677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.581688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.581804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.581815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.581983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.581994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.582200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.582212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.582462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.582474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.582667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.582679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.582883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.582894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.583070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.006 [2024-07-15 19:40:34.583082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.006 qpair failed and we were unable to recover it. 00:34:24.006 [2024-07-15 19:40:34.583333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.583345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 
00:34:24.007 [2024-07-15 19:40:34.583461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.583473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.583730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.583741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.583974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.583986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.584242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.584254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.584516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.584527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.584698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.584710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.584947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.584958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.585234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.585246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.585483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.585495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.585700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.585711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 
00:34:24.007 [2024-07-15 19:40:34.585829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.585840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.586096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.586107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.586363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.586376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.586493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.586504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.586708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.586720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.586955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.586967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.587247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.587259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.587378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.587390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.587568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.587580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.587748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.587760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 
00:34:24.007 [2024-07-15 19:40:34.587944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.587955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.588130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.588142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.588248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.588260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.588494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.588505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.588702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.588714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.588896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.588907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.589156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.589168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.589420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.589432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.589625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.589637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.589801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.589812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 
00:34:24.007 [2024-07-15 19:40:34.590008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.590019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.590295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.590307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.590428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.590439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.007 [2024-07-15 19:40:34.590621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.007 [2024-07-15 19:40:34.590633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.007 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.590735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.590747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.590928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.590939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.591131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.591142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.591310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.591321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.591493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.591504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.591737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.591750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 
00:34:24.008 [2024-07-15 19:40:34.591914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.591926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.592182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.592194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.592451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.592463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.592698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.592710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.592895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.592906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.593159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.593170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.593385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.593397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.593598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.593610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.593774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.593784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.594067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.594078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 
00:34:24.008 [2024-07-15 19:40:34.594316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.594327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.594530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.594542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.594773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.594786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.594978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.594989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.595240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.595253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.595503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.595515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.595780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.595792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.596071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.596083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.596248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.596260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.596475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.596487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 
00:34:24.008 [2024-07-15 19:40:34.596686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.596698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.596978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.596990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.597153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.597165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.597417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.597428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.597629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.597641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.597922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.597933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.598118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.598129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.598421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.598433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.598534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.598546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.598768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.598779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 
00:34:24.008 [2024-07-15 19:40:34.598891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.598902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.008 [2024-07-15 19:40:34.599161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.008 [2024-07-15 19:40:34.599173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.008 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.599340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.599352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.599536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.599547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.599727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.599739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.599927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.599939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.600170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.600181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.600347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.600359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.600589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.600600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.600785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.600798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 
00:34:24.009 [2024-07-15 19:40:34.600981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.600992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.601159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.601171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.601378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.601390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.601574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.601585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.601839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.601851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.602023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.602034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.602288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.602299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.602507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.602519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.602761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.602773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.602948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.602959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 
00:34:24.009 [2024-07-15 19:40:34.603078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.603090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.603253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.603266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.603506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.603520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.603636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.603647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.603829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.603841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.604046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.604058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.604223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.604253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.604485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.604497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.604703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.604715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.604983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.604995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 
00:34:24.009 [2024-07-15 19:40:34.605233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.605246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.605502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.605513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.605691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.605703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.605886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.605898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.606156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.606168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.606350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.606363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.009 [2024-07-15 19:40:34.606483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.009 [2024-07-15 19:40:34.606494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.009 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.606657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.606669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.606793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.606804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.606978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.606989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 
00:34:24.010 [2024-07-15 19:40:34.607179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.607190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.607426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.607438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.607667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.607679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.607961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.607973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.608208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.608219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.608398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.608409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.608640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.608652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.608820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.608832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.608997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.609008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.609303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.609316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 
00:34:24.010 [2024-07-15 19:40:34.609583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.609595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.609699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.609711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.609827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.609838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.610068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.610079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.610210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.610222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.610482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.610494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.610730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.610742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.610865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.610877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.611130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.611142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.611381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.611393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 
00:34:24.010 [2024-07-15 19:40:34.611665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.611677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.611840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.611852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.612107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.612121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.612239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.612251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.612444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.612456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.612641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.612652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.612897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.612908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.613107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.613119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.613286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.613298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.613465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.613477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 
00:34:24.010 [2024-07-15 19:40:34.613709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.613721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.613824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.613835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.614111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.614122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.614373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.010 [2024-07-15 19:40:34.614385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.010 qpair failed and we were unable to recover it. 00:34:24.010 [2024-07-15 19:40:34.614515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.614527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.614777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.614789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.615035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.615047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.615233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.615245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.615440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.615452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.615697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.615709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 
00:34:24.011 [2024-07-15 19:40:34.615995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.616007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.616287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.616299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.616502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.616514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.616746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.616757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.617010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.617022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.617139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.617150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.617352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.617364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.617563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.617575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.617795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.617807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.618040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.618052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 
00:34:24.011 [2024-07-15 19:40:34.618226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.618238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.618437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.618448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.618706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.618717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.618885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.618898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.619064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.619076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.619262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.619274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.619532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.619544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.619774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.619786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.620000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.620011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.620191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.620203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 
00:34:24.011 [2024-07-15 19:40:34.620439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.620451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.620614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.620626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.620857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.620870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.621050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.011 [2024-07-15 19:40:34.621062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.011 qpair failed and we were unable to recover it. 00:34:24.011 [2024-07-15 19:40:34.621170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.621182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.621312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.621324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.621578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.621590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.621899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.621910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.622096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.622108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.622276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.622287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 
00:34:24.012 [2024-07-15 19:40:34.622544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.622555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.622818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.622830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.623045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.623056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.623223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.623238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.623468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.623479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.623658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.623670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.623928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.623940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.624234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.624246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.624421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.624433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.624694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.624706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 
00:34:24.012 [2024-07-15 19:40:34.624891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.624903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.625155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.625167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.625364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.625377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.625580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.625592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.625758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.625770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.626027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.626038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.626295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.626307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.626477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.626488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.626740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.626752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.626929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.626941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 
00:34:24.012 [2024-07-15 19:40:34.627125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.627136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.627402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.627413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.627547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.627559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.627791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.627803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.628000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.628011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.628203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.628214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.628463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.628499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.628704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.628721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.628964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.628979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.629233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.629250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 
00:34:24.012 [2024-07-15 19:40:34.629489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.629505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.629703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.012 [2024-07-15 19:40:34.629718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.012 qpair failed and we were unable to recover it. 00:34:24.012 [2024-07-15 19:40:34.629971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.013 [2024-07-15 19:40:34.629991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.013 qpair failed and we were unable to recover it. 00:34:24.013 [2024-07-15 19:40:34.630201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.013 [2024-07-15 19:40:34.630216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.013 qpair failed and we were unable to recover it. 00:34:24.013 [2024-07-15 19:40:34.630485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.013 [2024-07-15 19:40:34.630501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.013 qpair failed and we were unable to recover it. 00:34:24.013 [2024-07-15 19:40:34.630763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.013 [2024-07-15 19:40:34.630778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.013 qpair failed and we were unable to recover it. 00:34:24.013 [2024-07-15 19:40:34.631047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.013 [2024-07-15 19:40:34.631062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.013 qpair failed and we were unable to recover it. 00:34:24.013 [2024-07-15 19:40:34.631327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.013 [2024-07-15 19:40:34.631343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.013 qpair failed and we were unable to recover it. 00:34:24.013 [2024-07-15 19:40:34.631479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.013 [2024-07-15 19:40:34.631494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.013 qpair failed and we were unable to recover it. 00:34:24.013 [2024-07-15 19:40:34.631763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.013 [2024-07-15 19:40:34.631778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.013 qpair failed and we were unable to recover it. 
00:34:24.013 [2024-07-15 19:40:34.632044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.013 [2024-07-15 19:40:34.632059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420
00:34:24.013 qpair failed and we were unable to recover it.
00:34:24.013 [the same errno = 111 connect() failure and "qpair failed and we were unable to recover it" report repeats for tqpair=0x7fc36c000b90 through 19:40:34.636602]
00:34:24.013 [2024-07-15 19:40:34.636725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.013 [2024-07-15 19:40:34.636755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420
00:34:24.013 qpair failed and we were unable to recover it.
00:34:24.014 [the same failure pattern repeats for tqpair=0x1c3ef90 through 19:40:34.659648]
00:34:24.016 [2024-07-15 19:40:34.659918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.016 [2024-07-15 19:40:34.659932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:24.016 qpair failed and we were unable to recover it.
00:34:24.018 [the same failure pattern repeats for tqpair=0x7fc364000b90 through 19:40:34.680215]
00:34:24.018 [2024-07-15 19:40:34.680397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.018 [2024-07-15 19:40:34.680408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.018 qpair failed and we were unable to recover it. 00:34:24.018 [2024-07-15 19:40:34.680675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.018 [2024-07-15 19:40:34.680687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.018 qpair failed and we were unable to recover it. 00:34:24.018 [2024-07-15 19:40:34.680938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.018 [2024-07-15 19:40:34.680950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.018 qpair failed and we were unable to recover it. 00:34:24.018 [2024-07-15 19:40:34.681077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.681089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.681287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.681298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.681497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.681508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.681682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.681694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.681949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.681961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.682217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.682232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.682500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.682511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 
00:34:24.019 [2024-07-15 19:40:34.682690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.682702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.682983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.682994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.683220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.683236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.683418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.683429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.683662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.683673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.683848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.683860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.684092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.684104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.684283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.684295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.684504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.684515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.684680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.684691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 
00:34:24.019 [2024-07-15 19:40:34.684898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.684909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.685095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.685106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.685362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.685373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.685542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.685553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.685766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.685779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.686011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.686023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.686200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.686212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.686382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.686395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.686600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.686611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.686720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.686732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 
00:34:24.019 [2024-07-15 19:40:34.686969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.686981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.687262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.687274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.687497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.687508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.687676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.687688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.687851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.687862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.688116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.688127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.688387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.688399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.688591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.688603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.688813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.688825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.689083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.689095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 
00:34:24.019 [2024-07-15 19:40:34.689282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.689294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.689477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.689489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.689764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.689775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.689980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.689991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-07-15 19:40:34.690247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.019 [2024-07-15 19:40:34.690259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.690518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.690530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.690710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.690722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.690889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.690901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.691177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.691189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.691384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.691396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 
00:34:24.020 [2024-07-15 19:40:34.691632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.691645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.691767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.691780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.692000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.692011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.692200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.692211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.692476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.692488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.692663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.692674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.692862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.692873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.692987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.692998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.693181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.693193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.693366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.693378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 
00:34:24.020 [2024-07-15 19:40:34.693611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.693623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.693834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.693846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.694110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.694121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.694385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.694397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.694640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.694654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.694825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.694837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.695066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.695077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.695246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.695258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.695485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.695496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.695659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.695670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 
00:34:24.020 [2024-07-15 19:40:34.695903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.695914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.696149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.696160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.696349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.696361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.696527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.696538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.696722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.696733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.696846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.696857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.020 qpair failed and we were unable to recover it. 00:34:24.020 [2024-07-15 19:40:34.697022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.020 [2024-07-15 19:40:34.697034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.697294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.697305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.697418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.697429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.697639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.697650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 
00:34:24.021 [2024-07-15 19:40:34.697832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.697844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.698023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.698034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.698264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.698275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.698512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.698523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.698778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.698789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.698917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.698929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.699176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.699187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.699369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.699381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.699632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.699643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.699860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.699872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 
00:34:24.021 [2024-07-15 19:40:34.700146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.700158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.700386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.700398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.700561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.700573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.700796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.700807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.701050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.701061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.701331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.701343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.701508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.701520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.701633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.701645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.701812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.701824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.702004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.702016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 
00:34:24.021 [2024-07-15 19:40:34.702294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.702305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.702506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.702517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.702757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.702769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.703026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.703037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.703200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.703214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.703330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.703342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.703596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.703607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.021 [2024-07-15 19:40:34.703772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.021 [2024-07-15 19:40:34.703784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.021 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.703988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.704000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.704181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.704193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 
00:34:24.022 [2024-07-15 19:40:34.704359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.704370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.704562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.704574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.704858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.704869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.705046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.705057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.705313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.705325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.705521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.705533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.705718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.705730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.705974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.705986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.706242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.706255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.706437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.706449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 
00:34:24.022 [2024-07-15 19:40:34.706707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.706718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.706848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.706859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.707097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.707109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.707358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.707371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.707601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.707613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.707776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.707788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.708020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.708031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.708202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.708214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.708418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.708438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.708696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.708711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 
00:34:24.022 [2024-07-15 19:40:34.708825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.708840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.709018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.709033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.709294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.709310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.709573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.709588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.709838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.709853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.710134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.710149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.710388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.710405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.710584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.710600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.710857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.710872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.711004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.711019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 
00:34:24.022 [2024-07-15 19:40:34.711156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.711171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.711435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.711450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.711636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.711651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.711898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.022 [2024-07-15 19:40:34.711914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.022 qpair failed and we were unable to recover it. 00:34:24.022 [2024-07-15 19:40:34.712103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.712118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.712310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.712325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.712445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.712461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.712652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.712667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.712900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.712916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.713156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.713171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 
00:34:24.023 [2024-07-15 19:40:34.713437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.713454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.713696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.713712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.713855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.713871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.714130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.714145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.714333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.714349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.714638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.714653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.714822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.714838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.715029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.715044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.715347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.715382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.715682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.715699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 
00:34:24.023 [2024-07-15 19:40:34.715941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.715957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.716163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.716179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.716396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.716412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.716603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.716618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.716791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.716806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.716992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.717007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.717248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.717263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.717527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.717543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.717729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.717744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.718019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.718034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 
00:34:24.023 [2024-07-15 19:40:34.718294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.718310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.718498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.718518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.718783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.718798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.719019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.719035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.719153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.719167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.719440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.719456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.719696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.719712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.719885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.719900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.023 [2024-07-15 19:40:34.720163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.023 [2024-07-15 19:40:34.720179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.023 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.720393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.720409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 
00:34:24.024 [2024-07-15 19:40:34.720626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.720642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.720935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.720949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.721142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.721157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.721398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.721414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.721654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.721669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.721848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.721862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.722102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.722117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.722318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.722333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.722542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.722557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.722764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.722779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 
00:34:24.024 [2024-07-15 19:40:34.722955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.722970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.723263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.723278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.723473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.723488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.723698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.723713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.723896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.723911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.724089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.724105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.724295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.724311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.724510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.724525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.724823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.724840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.725110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.725125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 
00:34:24.024 [2024-07-15 19:40:34.725391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.725407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.725671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.725686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.725899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.725914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.726154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.726169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.726465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.726481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.726675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.726690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.726899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.726914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.727122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.727138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.727379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.727395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.727694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.727710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 
00:34:24.024 [2024-07-15 19:40:34.727885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.727900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.728072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.728088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.728331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.728347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.728612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.728627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.728888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.728903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.024 [2024-07-15 19:40:34.729094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.024 [2024-07-15 19:40:34.729109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.024 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.729311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.729327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.729593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.729609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.729737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.729752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.729961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.729976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 
00:34:24.025 [2024-07-15 19:40:34.730147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.730163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.730414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.730430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.730627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.730643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.730913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.730929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.731164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.731180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.731373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.731392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.731518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.731533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.731750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.731765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.732059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.732074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.732322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.732338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 
00:34:24.025 [2024-07-15 19:40:34.732578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.732594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.732870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.732886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.733061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.733077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.733350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.733366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.733657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.733673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.733853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.733869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.734057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.734072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.734315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.734331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.734602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.734618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.734886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.734902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 
00:34:24.025 [2024-07-15 19:40:34.735109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.735124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.735315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.735332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.735609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.735625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.735892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.735907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.736094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.736110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.736401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.025 [2024-07-15 19:40:34.736416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.025 qpair failed and we were unable to recover it. 00:34:24.025 [2024-07-15 19:40:34.736683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.736698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.736953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.736969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.737209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.737228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.737487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.737503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 
00:34:24.026 [2024-07-15 19:40:34.737688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.737703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.737974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.737990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.738186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.738204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.738478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.738494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.738737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.738752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.738972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.738987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.739161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.739176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.739402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.739417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.739589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.739605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.739795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.739810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 
00:34:24.026 [2024-07-15 19:40:34.740080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.740096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.740336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.740352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.740603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.740619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.740877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.740893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.741159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.741174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.741367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.741383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.741624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.741640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.741827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.741843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.742036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.742051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.742313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.742329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 
00:34:24.026 [2024-07-15 19:40:34.742592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.742607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.742844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.742860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.743048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.743064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.743253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.743268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.743460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.743474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.743744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.743762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.744050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.744066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.744319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.744335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.744525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.744541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.744680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.744697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 
00:34:24.026 [2024-07-15 19:40:34.744903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.744918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.745131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.745146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.745339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.745355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.026 qpair failed and we were unable to recover it. 00:34:24.026 [2024-07-15 19:40:34.745645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.026 [2024-07-15 19:40:34.745661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.745849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.745865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.746105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.746122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.746361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.746377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.746646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.746662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.746835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.746850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.747061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.747077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 
00:34:24.027 [2024-07-15 19:40:34.747220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.747241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.747438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.747453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.747630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.747646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.747863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.747886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.748063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.748079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.748296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.748312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.748521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.748536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.748723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.748738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.748926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.748941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.749184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.749199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 
00:34:24.027 [2024-07-15 19:40:34.749392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.749408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.749618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.749633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.749872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.749887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.750070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.750085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.750348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.750363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.750658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.750673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.750921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.750939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.751201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.751216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.751510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.751526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.751700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.751716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 
00:34:24.027 [2024-07-15 19:40:34.751986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.752001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.752172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.752187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.752425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.752441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.752704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.752719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.752903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.752920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.753106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.753121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.753234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.753250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.753373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.753388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.753652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.753667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.753858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.753873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 
00:34:24.027 [2024-07-15 19:40:34.754066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.754082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.754322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.027 [2024-07-15 19:40:34.754337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.027 qpair failed and we were unable to recover it. 00:34:24.027 [2024-07-15 19:40:34.754529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.754545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.754738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.754753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.755013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.755028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.755316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.755331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.755523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.755538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.755805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.755820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.756011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.756026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.756237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.756253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 
00:34:24.028 [2024-07-15 19:40:34.756491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.756507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.756695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.756710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.756975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.756991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.757170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.757188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.757375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.757391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.757637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.757652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.757960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.757976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.758241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.758257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.758435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.758451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.758691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.758706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 
00:34:24.028 [2024-07-15 19:40:34.758902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.758917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.759174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.759189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.759361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.759377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.759499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.759514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.759780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.759795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.760079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.760094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.760304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.760319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.760573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.760588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.760762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.760777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.761037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.761053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 
00:34:24.028 [2024-07-15 19:40:34.761246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.761261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.761447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.761463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.761636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.761651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.761845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.761859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.762041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.762057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.762321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.762336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.762537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.762551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.762791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.762807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.763115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.028 [2024-07-15 19:40:34.763131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.028 qpair failed and we were unable to recover it. 00:34:24.028 [2024-07-15 19:40:34.763386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.763403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 
00:34:24.029 [2024-07-15 19:40:34.763665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.763683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.763967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.763982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.764188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.764203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.764382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.764398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.764664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.764679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.764915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.764930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.765169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.765185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.765453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.765468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.765732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.765748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.765938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.765954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 
00:34:24.029 [2024-07-15 19:40:34.766142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.766158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.766335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.766351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.766557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.766572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.766782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.766797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.767000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.767015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.767279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.767295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.767559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.767574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.767782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.767798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.768058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.768073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.768216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.768236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 
00:34:24.029 [2024-07-15 19:40:34.768350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.768365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.768543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.768558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.768682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.768697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.768887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.768902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.769102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.769117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.769426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.769442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.769721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.769736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.770003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.770021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.770146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.770161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.770348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.770363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 
00:34:24.029 [2024-07-15 19:40:34.770627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.770642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.770886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.770901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.771082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.771097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.771336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.771351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.771592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.771607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.771802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.771818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.772096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.772111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.772373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.772389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.029 qpair failed and we were unable to recover it. 00:34:24.029 [2024-07-15 19:40:34.772569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.029 [2024-07-15 19:40:34.772584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.772774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.772789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 
00:34:24.030 [2024-07-15 19:40:34.772914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.772929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.773155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.773170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.773436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.773452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.773625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.773639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.773898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.773913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.774201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.774217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.774404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.774420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.774707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.774723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.774896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.774911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.775046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.775061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 
00:34:24.030 [2024-07-15 19:40:34.775301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.775316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.775502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.775518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.775631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.775646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.775772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.775787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.776090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.776108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.776229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.776245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.776445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.776462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.776748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.776763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.776972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.776987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.777255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.777271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 
00:34:24.030 [2024-07-15 19:40:34.777445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.777460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.777660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.777676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.777870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.777885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.778095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.778111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.778303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.778318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.778502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.778518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.778661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.778676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.778894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.778909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.030 qpair failed and we were unable to recover it. 00:34:24.030 [2024-07-15 19:40:34.779193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.030 [2024-07-15 19:40:34.779210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.779489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.779509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 
00:34:24.031 [2024-07-15 19:40:34.779777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.779793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.780012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.780028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.780290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.780306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.780495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.780510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.780805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.780820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.781031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.781046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.781185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.781200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.781399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.781415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.781603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.781618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.781855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.781870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 
00:34:24.031 [2024-07-15 19:40:34.782121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.782137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.782403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.782422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.782614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.782630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.782896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.782911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.783200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.783215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.783416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.783431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.783690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.783705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.783882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.783897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.784020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.784035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.784312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.784329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 
00:34:24.031 [2024-07-15 19:40:34.784529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.784545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.784715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.784730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.784975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.784990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.785203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.785218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.785500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.785515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.785732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.785748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.785941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.785957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.786133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.786148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.786365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.786380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.786601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.786616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 
00:34:24.031 [2024-07-15 19:40:34.786897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.786912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.787187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.787202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.787439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.787455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.787697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.031 [2024-07-15 19:40:34.787712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.031 qpair failed and we were unable to recover it. 00:34:24.031 [2024-07-15 19:40:34.787900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.787915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.788141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.788156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.788396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.788412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.788590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.788606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.788933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.788952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.789166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.789180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 
00:34:24.032 [2024-07-15 19:40:34.789423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.789436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.789618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.789629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.789869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.789880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.790047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.790059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.790313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.790325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.790582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.790594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.790835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.790846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.791048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.791060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.791259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.791272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.791552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.791564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 
00:34:24.032 [2024-07-15 19:40:34.791822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.791833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.792015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.792029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.792153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.792165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.792336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.792348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.792538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.792550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.792807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.792820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1843189 Killed "${NVMF_APP[@]}" "$@" 00:34:24.032 [2024-07-15 19:40:34.792997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.793012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.793266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.793280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.793458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.793470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 
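The bash "Killed" message above indicates that the nvmf target application (the "${NVMF_APP[@]}" process started at line 36 of target_disconnect.sh) has been terminated, which appears to be why every connect() attempt to 10.0.0.2:4420 in this stretch of the log fails with errno = 111 (ECONNREFUSED on Linux). The following minimal, standalone C sketch reproduces that failure mode; it is an illustration only, not SPDK code, and it assumes the target address is reachable but nothing is listening on the port:

/* Minimal illustration (not SPDK code): connect() to a reachable host with
 * no listener on the port fails with ECONNREFUSED, which is errno 111 on Linux. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port used in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target application killed, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}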
00:34:24.032 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:24.032 [2024-07-15 19:40:34.793702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.793716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.793953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:24.032 [2024-07-15 19:40:34.793966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.794199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.794212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.794422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.794434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.794687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.794701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:24.032 [2024-07-15 19:40:34.794942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.794955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.795206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.795218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.795475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.795488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.795730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.795742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 
00:34:24.032 [2024-07-15 19:40:34.795938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.795949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.796077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.796088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.796293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.796306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.796536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.032 [2024-07-15 19:40:34.796548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.032 qpair failed and we were unable to recover it. 00:34:24.032 [2024-07-15 19:40:34.796759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.796771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:24.033 [2024-07-15 19:40:34.797031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.797046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.797214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.797231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.033 [2024-07-15 19:40:34.797539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.797552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.797786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.797798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 
00:34:24.033 [2024-07-15 19:40:34.797915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.797927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.798043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.798055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.798325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.798337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.798529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.798541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.798668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.798680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.798877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.798890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.799069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.799081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.799273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.799284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.799460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.799471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.799582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.799593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 
00:34:24.033 [2024-07-15 19:40:34.799704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.799715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.799891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.799903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.800138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.800150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.800280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.800292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.800492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.800504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.800733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.800746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.800928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.800941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.801106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.801121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.801324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.801339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.801600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.801612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 
00:34:24.033 [2024-07-15 19:40:34.801838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.801850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.802083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.802095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.802402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.802415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.802588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.802600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.802778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.802789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.802996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.803010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.803269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.803281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.803456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.803468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.803587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.033 [2024-07-15 19:40:34.803599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.033 qpair failed and we were unable to recover it. 00:34:24.033 [2024-07-15 19:40:34.803832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.803845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 
00:34:24.034 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1843910 00:34:24.034 [2024-07-15 19:40:34.804047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.804062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1843910 00:34:24.034 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:24.034 [2024-07-15 19:40:34.804264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.804279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.804560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.804574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1843910 ']' 00:34:24.034 [2024-07-15 19:40:34.804806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.804819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:24.034 [2024-07-15 19:40:34.804996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.805011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:24.034 [2024-07-15 19:40:34.805256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.805271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:24.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:24.034 [2024-07-15 19:40:34.805537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.805552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.805659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.805671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:24.034 [2024-07-15 19:40:34.805836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.805850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.805982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.805996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 19:40:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.034 [2024-07-15 19:40:34.806252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.806266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.806379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.806390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.806513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.806526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.806691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.806703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.806836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.806848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 
00:34:24.034 [2024-07-15 19:40:34.807040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.807052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.807172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.807184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.807431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.807443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.807568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.807581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.807766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.807780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.808032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.808044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.808239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.808251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.808485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.808500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.808693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.808708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.808939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.808951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 
00:34:24.034 [2024-07-15 19:40:34.809187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.809200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.809478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.809491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.809617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.809630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.809808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.809822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.810076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.810088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.810276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.810288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.810555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.810567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.810770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.810782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.034 [2024-07-15 19:40:34.811012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.034 [2024-07-15 19:40:34.811025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.034 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.811135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.811147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 
00:34:24.035 [2024-07-15 19:40:34.811384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.811397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.811500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.811512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.811764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.811777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.812043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.812056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.812172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.812185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.812368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.812381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.812581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.812593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.812773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.812784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.812910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.812923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.813113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.813127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 
00:34:24.035 [2024-07-15 19:40:34.813371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.813384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.813500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.813511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.813787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.813799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.814095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.814107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.814235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.814248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.814426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.814439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.814626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.814638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.814892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.814904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.815078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.815090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.815307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.815320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 
00:34:24.035 [2024-07-15 19:40:34.815497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.815510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.815696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.815708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.815871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.815884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.816089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.816103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.816272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.816286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.816470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.816482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.816666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.816678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.816850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.035 [2024-07-15 19:40:34.816862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.035 qpair failed and we were unable to recover it. 00:34:24.035 [2024-07-15 19:40:34.817078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.817091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.817262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.817274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 
00:34:24.036 [2024-07-15 19:40:34.817444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.817456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.817632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.817644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.817905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.817916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.818102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.818114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.818294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.818306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.818560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.818571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.818674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.818685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.818800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.818812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.818914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.818926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.819033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.819044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 
00:34:24.036 [2024-07-15 19:40:34.819276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.819289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.819469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.819482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.819660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.819673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.819944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.819956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.820118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.820130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.820313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.820325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.820509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.820522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.820691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.820703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.820905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.820918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.821183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.821197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 
00:34:24.036 [2024-07-15 19:40:34.821428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.821440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.821626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.821637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.821890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.821902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.822166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.822178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.822357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.822370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.822624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.822635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.822867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.822879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.823073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.823085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.823266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.823279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.823478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.823490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 
00:34:24.036 [2024-07-15 19:40:34.823595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.823607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.823709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.823722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.036 qpair failed and we were unable to recover it. 00:34:24.036 [2024-07-15 19:40:34.823962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.036 [2024-07-15 19:40:34.823974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.824240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.824253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.824428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.824440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.824642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.824653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.824898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.824909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.825093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.825104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.825382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.825395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.825576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.825589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 
00:34:24.037 [2024-07-15 19:40:34.825867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.825878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.825991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.826003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.826216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.826233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.826442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.826453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.826618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.826629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.826902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.826913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.827063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.827074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.827313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.827326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.827442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.827453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.827637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.827649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 
00:34:24.037 [2024-07-15 19:40:34.827768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.827779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.828038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.828050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.828150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.828162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.828425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.828437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.828638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.828650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.828862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.828874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.829073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.829084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.037 [2024-07-15 19:40:34.829280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.037 [2024-07-15 19:40:34.829291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.037 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.829467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.829480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.829660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.829674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 
00:34:24.320 [2024-07-15 19:40:34.829941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.829953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.830137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.830148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.830323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.830336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.830457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.830469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.830636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.830647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.830765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.830777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.831053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.831065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.831265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.831276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.831414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.831425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.831651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.831663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 
00:34:24.320 [2024-07-15 19:40:34.831763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.831775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.832009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.832021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.832201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.832213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.832412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.832424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.832607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.832619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.832733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.832744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.832861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.832873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.832976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.832988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.833156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.833167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.833258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.833270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 
00:34:24.320 [2024-07-15 19:40:34.833504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.833515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.833616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.833629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.833734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.833745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.833917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.833928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.834029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.834041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.834229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.834241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.834346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.834358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.320 [2024-07-15 19:40:34.834470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.320 [2024-07-15 19:40:34.834482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.320 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.834607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.834620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.834726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.834737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 
00:34:24.321 [2024-07-15 19:40:34.834839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.834851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.835026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.835037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.835166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.835179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.835376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.835388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.835584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.835597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.835725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.835737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.835863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.835874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.836066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.836078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.836235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.836247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.836416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.836429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 
00:34:24.321 [2024-07-15 19:40:34.836534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.836545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.836671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.836683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.836798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.836810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.836917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.836929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.837043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.837055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.837168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.837180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.837344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.837356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.837548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.837561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.837677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.837690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.837796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.837808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 
00:34:24.321 [2024-07-15 19:40:34.837988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.838000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.838108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.838119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.838295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.838306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.838407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.838419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.838491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.838503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.838675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.838687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.838801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.838812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.838916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.838928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.839033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.839045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.839276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.839289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 
00:34:24.321 [2024-07-15 19:40:34.839468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.839480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.839643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.839655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.839831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.839847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.840102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.840114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.840294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.840306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.840414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.840426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.840530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.840542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.840722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.840734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.840944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.840955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.841138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.841150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 
00:34:24.321 [2024-07-15 19:40:34.841264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.841276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.841374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.841385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.841511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.841522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.841702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.841713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.841902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.841915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.842080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.842092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.842197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.842210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.842356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.842368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.842466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.842478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.842643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.842658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 
00:34:24.321 [2024-07-15 19:40:34.842770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.842783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.843025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.843037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.843150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.843162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.843426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.843439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.843554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.843566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.843700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.843712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.843831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.843843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.844018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.844030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.844146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.844158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.844287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.844299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 
00:34:24.321 [2024-07-15 19:40:34.844411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.844423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.844602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.844614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.844815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.844827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.844917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.844928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.845033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.845044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.845160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.845172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.845338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.845350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.321 qpair failed and we were unable to recover it. 00:34:24.321 [2024-07-15 19:40:34.845472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.321 [2024-07-15 19:40:34.845483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.845606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.845618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.845730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.845742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 
00:34:24.322 [2024-07-15 19:40:34.845961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.845973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.846085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.846097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.846199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.846210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.846466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.846477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.846598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.846610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.846777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.846789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.846969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.846981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.847156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.847168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.847285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.847297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.847401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.847413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 
00:34:24.322 [2024-07-15 19:40:34.847531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.847544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.847645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.847656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.847833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.847845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.848109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.848121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.848285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.848297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.848409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.848420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.848598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.848609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.848773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.848786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.849066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.849078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.849257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.849271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 
00:34:24.322 [2024-07-15 19:40:34.849443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.849455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.849570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.849581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.849816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.849828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.849997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.850009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.850191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.850204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.850334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.850346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.850527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.850538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.850772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.850784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.850964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.850975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.851205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.851216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 
00:34:24.322 [2024-07-15 19:40:34.851423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.851435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.851538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.851549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.851678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.851689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.851933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.851945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.852221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.852238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.852402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.852415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.852503] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:34:24.322 [2024-07-15 19:40:34.852548] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:24.322 [2024-07-15 19:40:34.852616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.852650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.852845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.852860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.852953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.852967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 
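The entry above records SPDK v24.09-pre (git sha1 a95bbf233) starting against DPDK 24.07.0-rc2 with the listed EAL parameters: a core mask of 0xF0, telemetry disabled, per-library log levels, a fixed --base-virtaddr, a dedicated --file-prefix, and automatic process-type detection. As a rough illustration (not part of this log), an application hands exactly this kind of argument vector to rte_eal_init(); the sketch below assumes a standard DPDK installation and copies a subset of the arguments from the entry above purely as placeholders.

#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    /* Illustrative only: EAL arguments mirroring the ones logged above. */
    char *eal_argv[] = {
        "nvmf",
        "-c", "0xF0",
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--base-virtaddr=0x200000000000",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    /* rte_eal_init() parses the EAL arguments and brings up the DPDK environment. */
    int ret = rte_eal_init(eal_argc, eal_argv);
    if (ret < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }

    /* ... application work would run here ... */

    rte_eal_cleanup();
    return 0;
}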
00:34:24.322 [2024-07-15 19:40:34.853153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.853168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.853296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.853312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.853492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.853507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.853623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.853639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.853939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.853955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.854154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.854171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.854401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.854419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.854613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.854630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.854775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.854792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.855058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.855075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 
00:34:24.322 [2024-07-15 19:40:34.855285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.855302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.855547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.855563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.855684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.855701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.855893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.855910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.856087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.856103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.856373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.856390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.856499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.856514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.856704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.856719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.856915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.856930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.857130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.857150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 
00:34:24.322 [2024-07-15 19:40:34.857254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.857270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.857554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.857569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.857770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.857784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.857908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.857923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.858049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.858064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.322 [2024-07-15 19:40:34.858254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.322 [2024-07-15 19:40:34.858267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.322 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.858519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.858530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.858768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.858780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.858897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.858908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.859142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.859154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 
00:34:24.323 [2024-07-15 19:40:34.859269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.859281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.859444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.859455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.859560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.859575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.859760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.859772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.859938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.859949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.860138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.860149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.860242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.860254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.860430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.860442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.860654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.860673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.860754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.860766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 
00:34:24.323 [2024-07-15 19:40:34.860883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.860894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.861070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.861082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.861355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.861367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.861492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.861504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.861688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.861699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.861945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.861957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.862141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.862152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.862344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.862357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.862533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.862545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.862791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.862803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 
00:34:24.323 [2024-07-15 19:40:34.862887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.862899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.863001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.863013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.863079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.863091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.863291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.863303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.863421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.863433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.863555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.863567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.863767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.863779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.863907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.863920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.864036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.864048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.864235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.864253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 
00:34:24.323 [2024-07-15 19:40:34.864437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.864454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.864639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.864655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.864830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.864845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.864980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.864996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.865101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.865117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.865309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.865326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.865593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.865609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.865859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.865875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.866039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.866054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.866171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.866186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 
00:34:24.323 [2024-07-15 19:40:34.866427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.866443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.866567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.866582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.866824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.866839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.867035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.867051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.867294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.867309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.867492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.867508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.867699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.867714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.867905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.867922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.868119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.868135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.868257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.868271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 
00:34:24.323 [2024-07-15 19:40:34.868445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.868456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.868625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.868636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.868735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.868747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.868930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.868941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.869112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.869124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.869302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.323 [2024-07-15 19:40:34.869315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.323 qpair failed and we were unable to recover it. 00:34:24.323 [2024-07-15 19:40:34.869404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.869416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.869528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.869540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.869746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.869758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.869876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.869888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 
00:34:24.324 [2024-07-15 19:40:34.870070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.870081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.870341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.870353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.870465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.870478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.870667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.870678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.870793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.870804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.870917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.870930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.871105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.871117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.871282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.871294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.871489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.871501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.871624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.871638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 
00:34:24.324 [2024-07-15 19:40:34.871809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.871820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.871990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.872003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.872269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.872281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.872464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.872476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.872654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.872666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.872839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.872850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.872939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.872950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.873132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.873145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.873327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.873339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.873517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.873530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 
00:34:24.324 [2024-07-15 19:40:34.873643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.873654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.873762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.873774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.873872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.873883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.873998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.874011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.874109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.874122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.874242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.874256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.874420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.874432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.874608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.874621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.874788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.874800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.874903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.874915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 
00:34:24.324 [2024-07-15 19:40:34.875030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.875042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.875143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.875155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.875272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.875285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.875570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.875582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.875694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.875706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.875845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.875858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.875982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.875995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.876160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.876172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.876351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.876371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.876477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.876489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 
00:34:24.324 [2024-07-15 19:40:34.876699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.876710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.876896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.876908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.877149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.877161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.877324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.877337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.877455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.877468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.877636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.877648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.877823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.877835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.877948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.877961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.878069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.878080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.878299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.878320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 
00:34:24.324 [2024-07-15 19:40:34.878467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.878478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.878651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.878663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.878785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.878797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.878987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.878998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.879193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.879205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.879415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.879427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.879554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.879567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.879681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.879692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.879931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.879944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.880069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.880080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 
00:34:24.324 [2024-07-15 19:40:34.880249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.880262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.880376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.880388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.880591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.880604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.880790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.880802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.880972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.880985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.324 [2024-07-15 19:40:34.881112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.324 [2024-07-15 19:40:34.881124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.324 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.881240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.881252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.881362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.881373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.881552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.881564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.881805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.881817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 
00:34:24.325 [2024-07-15 19:40:34.881921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.881933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.882113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.882125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.882248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.882259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.882485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.882496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.882614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.882625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.882802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.882814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.882907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.882918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 EAL: No free 2048 kB hugepages reported on node 1 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.883117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.883136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.883264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.883280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.883400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.883415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 
00:34:24.325 [2024-07-15 19:40:34.883592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.883607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.883709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.883725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.883832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.883848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.883966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.883981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.884156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.884171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.884361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.884377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.884612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.884627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.884806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.884821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.884995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.885010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc35c000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.885218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.885234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 
00:34:24.325 [2024-07-15 19:40:34.885373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.885385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.885584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.885596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.885768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.885780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.885918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.885930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.886099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.886111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.886287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.886299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.886503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.886514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.886695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.886707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.886885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.886897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.886992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.887003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 
00:34:24.325 [2024-07-15 19:40:34.887253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.887265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.887427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.887439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.887547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.887561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.887818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.887830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.888062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.888073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.888192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.888204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.888306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.888318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.888498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.888510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.888645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.888657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.888831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.888843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 
00:34:24.325 [2024-07-15 19:40:34.889034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.889046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.889152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.889164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.889276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.889287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.889462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.889475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.889588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.889598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.889713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.889722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.889847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.889857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.890116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.890127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.890254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.890248] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:24.325 [2024-07-15 19:40:34.890264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 
00:34:24.325 [2024-07-15 19:40:34.890385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.890396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.890525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.890534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.890733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.890743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.890924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.890934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.891193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.891203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.891303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.891313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.891502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.891512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.891693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.325 [2024-07-15 19:40:34.891704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.325 qpair failed and we were unable to recover it. 00:34:24.325 [2024-07-15 19:40:34.891947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.891957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.892073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.892083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 
00:34:24.326 [2024-07-15 19:40:34.892250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.892260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.892442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.892451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.892709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.892719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.892892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.892902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.893083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.893092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.893230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.893240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.893439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.893449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.893620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.893629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.893739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.893748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.893932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.893941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 
00:34:24.326 [2024-07-15 19:40:34.894159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.894168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.894299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.894310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.894442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.894452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.894566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.894576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.894702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.894711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.894889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.894898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.895094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.895104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.895322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.895332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.895454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.895464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.895576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.895585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 
00:34:24.326 [2024-07-15 19:40:34.895760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.895770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.895934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.895943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.896143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.896153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.896259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.896269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.896392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.896401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.896575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.896585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.896783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.896794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.896912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.896923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.897106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.897116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.897300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.897310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 
00:34:24.326 [2024-07-15 19:40:34.897425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.897435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.897611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.897621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.897812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.897822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.897946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.897956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.898125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.898135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.898306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.898319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.898421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.898431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.898606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.898615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.898725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.898736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.898821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.898832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 
00:34:24.326 [2024-07-15 19:40:34.899071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.899080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.899192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.899202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.899391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.899401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.899498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.899509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.899695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.899705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.899885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.899896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.900133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.900143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.900330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.900339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.900525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.900536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.900727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.900738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 
00:34:24.326 [2024-07-15 19:40:34.900945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.900955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.901040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.901050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.901159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.901170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.901287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.901297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.901455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.901465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.901665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.901675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.901853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.901862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.902028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.902038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.902252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.902263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.902395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.902406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 
00:34:24.326 [2024-07-15 19:40:34.902573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.902584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.902803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.902813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.902926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.902936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.903105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.903115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.903305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.903315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.903547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.903557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.903795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.903807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.903984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.326 [2024-07-15 19:40:34.903993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.326 qpair failed and we were unable to recover it. 00:34:24.326 [2024-07-15 19:40:34.904108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.904118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.904304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.904314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 
00:34:24.327 [2024-07-15 19:40:34.904432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.904441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.904539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.904549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.904665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.904674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.904946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.904956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.905075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.905085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.905352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.905362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.905541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.905551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.905718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.905728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.905931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.905941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.906113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.906123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 
00:34:24.327 [2024-07-15 19:40:34.906244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.906255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.906441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.906451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.906631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.906641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.906775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.906785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.906897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.906907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.906991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.907000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.907112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.907122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.907360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.907370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.907495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.907505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.907684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.907695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 
00:34:24.327 [2024-07-15 19:40:34.907860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.907870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.907973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.907983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.908096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.908107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.908312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.908322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.908578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.908588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.908764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.908773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.908943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.908953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.909178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.909187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.909311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.909321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.909464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.909475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 
00:34:24.327 [2024-07-15 19:40:34.909592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.909603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.909786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.909796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.909959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.909969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.910148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.910159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.910267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.910276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.910447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.910458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.910686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.910698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.910890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.910900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.911068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.911079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.911201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.911211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 
00:34:24.327 [2024-07-15 19:40:34.911404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.911414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.911516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.911527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.911651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.911661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.911837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.911848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.911966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.911976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.912140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.912150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.912251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.912262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.912369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.912379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.912491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.912502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.912685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.912694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 
00:34:24.327 [2024-07-15 19:40:34.912891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.912901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.913075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.913084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.913199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.913209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.913329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.913340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.913515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.913526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.913692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.913702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.913820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.913831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.914092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.914102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.914260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.914270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.914365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.914376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 
00:34:24.327 [2024-07-15 19:40:34.914583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.914593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.914764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.914774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.914943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.914954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.915052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.915062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.915342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.915346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:24.327 [2024-07-15 19:40:34.915353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.915596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.915606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.915744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.915754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.915933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.915943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 00:34:24.327 [2024-07-15 19:40:34.916110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.327 [2024-07-15 19:40:34.916120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.327 qpair failed and we were unable to recover it. 
00:34:24.328 [2024-07-15 19:40:34.916230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.916241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.916353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.916364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.916465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.916474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.916660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.916671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.916839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.916848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.917033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.917043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.917175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.917185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.917348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.917359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.917524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.917534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.917625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.917635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 
00:34:24.328 [2024-07-15 19:40:34.917769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.917779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.918012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.918022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.918125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.918135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.918243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.918254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.918375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.918385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.918498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.918509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.918677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.918690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.918885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.918895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.919012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.919023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.919213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.919228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 
00:34:24.328 [2024-07-15 19:40:34.919303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.919315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.919494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.919505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.919683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.919693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.919864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.919874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.920048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.920057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.920181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.920191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.920359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.920370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.920566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.920577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.920677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.920688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.920777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.920788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 
00:34:24.328 [2024-07-15 19:40:34.920897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.920908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.921143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.921154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.921333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.921344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.921430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.921440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.921641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.921651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.921818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.921836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.921948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.921957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.922072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.922083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.922181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.922191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.922361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.922372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 
00:34:24.328 [2024-07-15 19:40:34.922487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.922504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.922619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.922630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.922792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.922803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.922988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.922999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.923145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.923155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.923271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.923282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.923442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.923454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.923675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.923697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.923906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.923921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.924116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.924132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 
00:34:24.328 [2024-07-15 19:40:34.924373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.924390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.924519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.924534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.924775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.924790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.925066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.925082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.925213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.925233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.925380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.925395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.925569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.925583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.925776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.925791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.925974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.925988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.926116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.926129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 
00:34:24.328 [2024-07-15 19:40:34.926387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.926401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.926513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.926524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.926755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.926765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.328 [2024-07-15 19:40:34.926933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.328 [2024-07-15 19:40:34.926944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.328 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.927070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.927080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.927191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.927201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.927326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.927337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.927460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.927471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.927640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.927651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.927823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.927833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 
00:34:24.329 [2024-07-15 19:40:34.927944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.927955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.928069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.928079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.928243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.928254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.928437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.928448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.928631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.928641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.928807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.928817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.928888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.928897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.929083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.929093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.929219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.929232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.929428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.929438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 
00:34:24.329 [2024-07-15 19:40:34.929613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.929623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.929739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.929751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.929947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.929958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.930141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.930152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.930330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.930341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.930543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.930554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.930732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.930742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.930867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.930885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.931026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.931040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.931168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.931182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 
00:34:24.329 [2024-07-15 19:40:34.931398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.931412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.931572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.931587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.931706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.931720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.931826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.931841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.932013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.932028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.932217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.932237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.932414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.932428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.932616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.932631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.932896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.932911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.933097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.933112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 
00:34:24.329 [2024-07-15 19:40:34.933208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.933230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.933445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.933460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.933663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.933678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.933869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.933883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.934049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.934065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.934328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.934349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.934476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.934495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.934680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.934698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.934893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.934910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.935110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.935128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 
00:34:24.329 [2024-07-15 19:40:34.935373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.935392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.935534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.935551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.935751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.935768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.935897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.935912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.936081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.936098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.936290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.936308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.936448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.936463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.936595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.936611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.936765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.936780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.937029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.937044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 
00:34:24.329 [2024-07-15 19:40:34.937235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.937251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.937457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.937473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.937763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.937779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.937968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.937983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.938163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.938179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.938363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.938380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.938522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.938538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.938847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.938874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.938987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.938997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.329 [2024-07-15 19:40:34.939134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.939145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 
00:34:24.329 [2024-07-15 19:40:34.939279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.329 [2024-07-15 19:40:34.939292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.329 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.939394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.939405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.939662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.939673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.939782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.939792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.939979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.939990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.940107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.940118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.940246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.940258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.940445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.940457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.940571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.940583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.940741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.940752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 
00:34:24.330 [2024-07-15 19:40:34.940867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.940882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.940984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.940995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.941165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.941176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.941280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.941293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.941398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.941408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.941579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.941591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.941713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.941726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.941859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.941870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.941998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.942010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.942078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.942089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 
00:34:24.330 [2024-07-15 19:40:34.942187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.942198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.942301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.942313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.942412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.942423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.942598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.942610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.942794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.942806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.943069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.943080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.943328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.943342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.943461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.943473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.943577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.943587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.943703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.943715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 
00:34:24.330 [2024-07-15 19:40:34.943889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.943899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.944091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.944101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.944216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.944231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.944351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.944364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.944610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.944621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.944805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.944815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.944911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.944922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.945141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.945163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.945277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.945291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.945483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.945498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 
00:34:24.330 [2024-07-15 19:40:34.945688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.945703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.945781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.945795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.946030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.946044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.946218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.946237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.946464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.946478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.946587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.946602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.946714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.946728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.946860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.946874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.946997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.947011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.947148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.947162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 
00:34:24.330 [2024-07-15 19:40:34.947328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.947347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.947538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.947552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.947637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.947651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.947824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.947838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.948011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.948025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.948214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.948232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.948421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.948435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.948614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.948629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.948742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.948758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.948945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.948960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 
00:34:24.330 [2024-07-15 19:40:34.949236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.949250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.330 qpair failed and we were unable to recover it. 00:34:24.330 [2024-07-15 19:40:34.949378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.330 [2024-07-15 19:40:34.949392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.949495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.949509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.949702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.949716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.949908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.949922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.950206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.950220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.950414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.950428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.950678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.950693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.950899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.950913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.951039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.951052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 
00:34:24.331 [2024-07-15 19:40:34.951235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.951250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.951442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.951457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.951652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.951666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.951881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.951895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.952019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.952034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.952214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.952238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.952366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.952384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.952631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.952648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.952850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.952861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.952975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.952986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 
00:34:24.331 [2024-07-15 19:40:34.953156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.953166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.953266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.953277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.953445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.953455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.953686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.953697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.953900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.953909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.954036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.954046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.954222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.954237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.954361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.954373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.954487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.954500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.954618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.954636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 
00:34:24.331 [2024-07-15 19:40:34.954819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.954836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.954978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.954989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.955127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.955139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.955337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.955352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.955461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.955471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.955543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.955553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.955721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.955731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.955837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.955847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.955842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:24.331 [2024-07-15 19:40:34.955874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:24.331 [2024-07-15 19:40:34.955881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:24.331 [2024-07-15 19:40:34.955889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:24.331 [2024-07-15 19:40:34.955895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
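The app_setup_trace notices above describe how the trace data for this run could be captured. A minimal sketch based only on the command and file named in those notices; the destination path below is an illustrative placeholder, not part of the original output:
    # capture a snapshot of the running nvmf app's tracepoints (command quoted from the notice above)
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis, as the notice suggests;
    # /tmp/nvmf_trace.0 is only an assumed example destination
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0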
00:34:24.331 [2024-07-15 19:40:34.956030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.956041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.956007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:24.331 [2024-07-15 19:40:34.956112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:24.331 [2024-07-15 19:40:34.956208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.956223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.956197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:24.331 [2024-07-15 19:40:34.956198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:24.331 [2024-07-15 19:40:34.956339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.956350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.956430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.956441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.956618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.956628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.956804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.956815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.956997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.957007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.957195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.957206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.957280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.957291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 
00:34:24.331 [2024-07-15 19:40:34.957563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.957574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.957672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.957685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.957857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.957868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.957934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.957943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.958124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.958134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.958314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.958325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.958507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.958518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.958662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.958673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.958850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.958861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.959114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.959125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 
00:34:24.331 [2024-07-15 19:40:34.959298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.959309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.959432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.959442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.959625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.959635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.959807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.959817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.960049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.960060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.960325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.960336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.960460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.960470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.960705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.960715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.960829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.960840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.961019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.961029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 
00:34:24.331 [2024-07-15 19:40:34.961195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.961209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.961446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.331 [2024-07-15 19:40:34.961456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.331 qpair failed and we were unable to recover it. 00:34:24.331 [2024-07-15 19:40:34.961624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.961634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.961809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.961819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.962002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.962013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.962208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.962219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.962399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.962410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.962525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.962536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.962712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.962723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.962890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.962900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 
00:34:24.332 [2024-07-15 19:40:34.962985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.962995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.963106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.963116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.963213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.963228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.963337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.963347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.963517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.963528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.963763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.963774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.964037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.964048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.964116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.964126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.964360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.964372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.964615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.964626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 
00:34:24.332 [2024-07-15 19:40:34.964857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.964868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.965046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.965056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.965182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.965192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.965305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.965316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.965493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.965504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.965594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.965605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.965726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.965736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.965857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.965868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.965980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.965990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.966156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.966168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 
00:34:24.332 [2024-07-15 19:40:34.966345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.966356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.966535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.966547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.966731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.966742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.966856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.966867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.966984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.966995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.967182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.967194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.967295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.967305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.967432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.967442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.967646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.967657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.967841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.967853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 
00:34:24.332 [2024-07-15 19:40:34.968089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.968104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.968335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.968351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.968482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.968492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.968686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.968697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.968889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.968900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.969110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.969121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.969312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.969323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.969493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.969504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.969595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.969606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.969861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.969872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 
00:34:24.332 [2024-07-15 19:40:34.970155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.970167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.970286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.970297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.970416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.970427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.970538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.970549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.970661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.970671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.970797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.970809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.970963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.970974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.971094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.971106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.971271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.971283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.971537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.971549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 
00:34:24.332 [2024-07-15 19:40:34.971649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.971660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.971776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.971787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.971904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.971916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.972153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.972165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.972367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.972378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.972468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.972478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.972671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.972683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.972972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.972983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.973053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.973063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.973232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.973243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 
00:34:24.332 [2024-07-15 19:40:34.973423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.973434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.973583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.973594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.973675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.973685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.973942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.332 [2024-07-15 19:40:34.973953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.332 qpair failed and we were unable to recover it. 00:34:24.332 [2024-07-15 19:40:34.974074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.974085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.974284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.974296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.974466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.974477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.974728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.974739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.974868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.974879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.975061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.975072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 
00:34:24.333 [2024-07-15 19:40:34.975253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.975267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.975376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.975386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.975550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.975561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.975737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.975749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.975884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.975895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.976160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.976174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.976355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.976366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.976539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.976558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.976820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.976832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.977031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.977041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 
00:34:24.333 [2024-07-15 19:40:34.977285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.977295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.977479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.977489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.977701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.977710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.977836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.977845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.978034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.978045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.978154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.978163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.978281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.978292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.978551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.978561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.978769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.978779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.978964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.978974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 
00:34:24.333 [2024-07-15 19:40:34.979095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.979106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.979235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.979245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.979413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.979423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.979702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.979713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.979836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.979847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.980081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.980093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.980208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.980217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.980368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.980379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.980577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.980587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.980700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.980710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 
00:34:24.333 [2024-07-15 19:40:34.980888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.980899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.981076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.981086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.981188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.981198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.981319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.981330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.981496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.981507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.981588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.981598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.981774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.981785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.981910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.981921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.982087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.982096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.982280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.982291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 
00:34:24.333 [2024-07-15 19:40:34.982403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.982417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.982529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.982539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.982743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.982753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.982880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.982891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.982997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.983008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.983249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.983261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.983496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.983506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.983688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.983698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.984004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.984015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.984181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.984191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 
00:34:24.333 [2024-07-15 19:40:34.984365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.984376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.984507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.984517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.984616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.984626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.984756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.984767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.984964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.984974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.985084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.985094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.985305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.985316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.985536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.985546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.333 [2024-07-15 19:40:34.985715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.333 [2024-07-15 19:40:34.985726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.333 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.985876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.985885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 
00:34:24.334 [2024-07-15 19:40:34.986008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.986019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.986201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.986212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.986325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.986335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.986520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.986530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.986644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.986654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.986837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.986848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.986974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.986984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.987086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.987096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.987328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.987340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.987515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.987526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 
00:34:24.334 [2024-07-15 19:40:34.987711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.987722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.987895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.987906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.988037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.988048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.988238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.988248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.988493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.988504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.988667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.988677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.988933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.988944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.989058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.989069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.989209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.989223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.989334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.989345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 
00:34:24.334 [2024-07-15 19:40:34.989544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.989556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.989666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.989678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.989859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.989869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.989979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.989988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.990155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.990166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.990278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.990290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.990403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.990414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.990574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.990584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.990696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.990707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.990820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.990830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 
00:34:24.334 [2024-07-15 19:40:34.990997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.991008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.991126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.991137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.991251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.991262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.991463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.991473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.991600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.991611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.991838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.991850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.991949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.991960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.992064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.992073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.992188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.992198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.992360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.992370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 
00:34:24.334 [2024-07-15 19:40:34.992482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.992492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.992726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.992736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.992856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.992865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.992967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.992977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.993088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.993097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.993289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.993299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.993512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.993521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.993682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.993692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.993854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.993863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.994042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.994052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 
00:34:24.334 [2024-07-15 19:40:34.994248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.994259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.994420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.994430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.994534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.994544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.994706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.994717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.994823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.994833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.994954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.994963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.995061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.995070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.995243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.995253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.995366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.995376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.995513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.995523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 
00:34:24.334 [2024-07-15 19:40:34.995687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.995698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.995849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.995859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.996059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.996069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.996177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.996186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.996298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.996309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.996419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.996429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.996542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.996553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.996653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.996662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.996841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.996852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 00:34:24.334 [2024-07-15 19:40:34.997020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.334 [2024-07-15 19:40:34.997030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.334 qpair failed and we were unable to recover it. 
00:34:24.335 [2024-07-15 19:40:34.997100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.997110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.997276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.997286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.997352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.997362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.997592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.997602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.997775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.997785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.997902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.997912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.998075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.998085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.998344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.998354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.998468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.998478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.998654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.998665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 
00:34:24.335 [2024-07-15 19:40:34.998773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.998782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.998888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.998898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.999015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.999024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.999233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.999244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.999361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.999370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.999541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.999551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.999717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.999727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:34.999868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:34.999905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3ef90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.000070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.000101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.000237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.000252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 
00:34:24.335 [2024-07-15 19:40:35.000469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.000483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.000758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.000771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.000882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.000895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.001017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.001031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.001213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.001231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.001469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.001483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.001689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.001702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.001827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.001840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.002009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.002022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 00:34:24.335 [2024-07-15 19:40:35.002141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.335 [2024-07-15 19:40:35.002154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.335 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with timestamps 19:40:35.002289 through 19:40:35.006771 ...]
00:34:24.336 [2024-07-15 19:40:35.006904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.336 [2024-07-15 19:40:35.006917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:24.336 qpair failed and we were unable to recover it.
[... the same failure then repeats for tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420, timestamps 19:40:35.007088 through 19:40:35.035477 ...]
00:34:24.340 [2024-07-15 19:40:35.035709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.340 [2024-07-15 19:40:35.035719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420
00:34:24.340 qpair failed and we were unable to recover it.
00:34:24.340 [2024-07-15 19:40:35.035843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.035852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.035952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.035962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.036127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.036137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.036368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.036378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.036556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.036565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.036754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.036764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.036899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.036910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.037008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.037018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.037189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.037198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.037317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.037327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 
00:34:24.340 [2024-07-15 19:40:35.037442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.037452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.037620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.037630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.037758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.340 [2024-07-15 19:40:35.037768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.340 qpair failed and we were unable to recover it. 00:34:24.340 [2024-07-15 19:40:35.037867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.037877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.038049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.038059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.038292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.038301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.038418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.038427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.038546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.038556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.038682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.038692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.038950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.038960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 
00:34:24.341 [2024-07-15 19:40:35.039124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.039133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.039322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.039332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.039501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.039511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.039675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.039685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.039790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.039799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.040030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.040040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.040166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.040175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.040319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.040331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.040503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.040512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.040619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.040629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 
00:34:24.341 [2024-07-15 19:40:35.040827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.040836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.040967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.040976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.041107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.041116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.041298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.041307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.041478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.041488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.041672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.041681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.041847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.041856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.041959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.041969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.042088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.042098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.042259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.042269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 
00:34:24.341 [2024-07-15 19:40:35.042467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.042476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.042683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.042693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.042875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.042884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.043010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.043019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.043198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.043207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.043390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.043400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.043594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.043603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.043798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.043807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.043973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.043982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.341 [2024-07-15 19:40:35.044184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.044193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 
00:34:24.341 [2024-07-15 19:40:35.044313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.341 [2024-07-15 19:40:35.044324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.341 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.044503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.044512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.044767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.044776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.044888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.044897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.045079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.045089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.045199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.045209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.045326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.045336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.045534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.045543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.045727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.045737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.045857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.045867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 
00:34:24.342 [2024-07-15 19:40:35.046074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.046084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.046209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.046219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.046344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.046354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.046553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.046563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.046802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.046812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.046944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.046954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.047056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.047066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.047250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.047263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.047446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.047456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.047566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.047575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 
00:34:24.342 [2024-07-15 19:40:35.047829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.047839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.047969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.047979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.048152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.048162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.048239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.048249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.048500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.048510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.048635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.048644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.048710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.048720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.048832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.048841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.048955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.048965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.049161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.049170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 
00:34:24.342 [2024-07-15 19:40:35.049293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.049304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.049478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.049488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.049591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.049601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.049721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.049731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.049841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.049850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.050084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.050093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.050323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.050333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.050575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.050585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.050768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.050777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.342 qpair failed and we were unable to recover it. 00:34:24.342 [2024-07-15 19:40:35.050893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.342 [2024-07-15 19:40:35.050902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 
00:34:24.343 [2024-07-15 19:40:35.051085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.051095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.051280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.051290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.051383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.051392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.051555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.051564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.051745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.051755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.051925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.051935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.052104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.052113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.052364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.052374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.052504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.052514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.052712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.052722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 
00:34:24.343 [2024-07-15 19:40:35.052831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.052840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.052960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.052970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.053144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.053153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.053328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.053338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.053485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.053495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.053607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.053617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.053791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.053801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.053916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.053928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.054112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.054122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.054378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.054388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 
00:34:24.343 [2024-07-15 19:40:35.054475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.054485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.054657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.054667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.054762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.054772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.054882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.054891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.055001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.055011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.055107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.055117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.055295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.055305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.055488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.055497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.055615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.055625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.055796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.055805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 
00:34:24.343 [2024-07-15 19:40:35.055922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.055932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.056110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.056120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.056284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.056294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.056523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.056532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.056703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.056713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.056881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.056891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.057030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.057040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.057213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.057222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.057415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.343 [2024-07-15 19:40:35.057424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.343 qpair failed and we were unable to recover it. 00:34:24.343 [2024-07-15 19:40:35.057523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.057533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 
00:34:24.344 [2024-07-15 19:40:35.057715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.057725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.057890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.057899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.057965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.057974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.058140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.058150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.058336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.058346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.058511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.058521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.058626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.058635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.058814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.058823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.059012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.059021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.059140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.059150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 
00:34:24.344 [2024-07-15 19:40:35.059276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.059286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.059476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.059486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.059602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.059613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.059712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.059722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.059860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.059870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.060059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.060068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.060199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.060209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.060305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.060317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.060508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.060517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.060677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.060686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 
00:34:24.344 [2024-07-15 19:40:35.060930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.060939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.061108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.061117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.061296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.061306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.061429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.061439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.061623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.061633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.061801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.061810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.061937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.061947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.062124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.062134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.062314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.062324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.062513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.062523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 
00:34:24.344 [2024-07-15 19:40:35.062643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.062652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.062907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.062917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.063038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.063048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.344 qpair failed and we were unable to recover it. 00:34:24.344 [2024-07-15 19:40:35.063215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.344 [2024-07-15 19:40:35.063227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.063407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.063417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.063552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.063561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.063727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.063737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.064024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.064034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.064206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.064215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.064390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.064400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 
00:34:24.345 [2024-07-15 19:40:35.064583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.064593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.064712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.064722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.064909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.064919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.065041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.065051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.065220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.065234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.065397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.065407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.065582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.065591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.065843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.065853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.065989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.065998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.066165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.066175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 
00:34:24.345 [2024-07-15 19:40:35.066352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.066362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.066537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.066546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.066740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.066751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.066939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.066949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.067149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.067158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.067325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.067335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.067479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.067488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.067662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.067673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.067928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.067938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.068192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.068202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 
00:34:24.345 [2024-07-15 19:40:35.068372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.068382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.068506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.068516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.068710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.068719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.068973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.068983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.069121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.069131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.069305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.069315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.069480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.069490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.069634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.069643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.069764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.069774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.345 [2024-07-15 19:40:35.070004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.070014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 
00:34:24.345 [2024-07-15 19:40:35.070118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.345 [2024-07-15 19:40:35.070127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.345 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.070287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.070298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.070471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.070480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.070665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.070674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.070747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.070757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.070974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.070984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.071181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.071191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.071315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.071325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.071435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.071445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.071513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.071529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 
00:34:24.346 [2024-07-15 19:40:35.071675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.071684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.071915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.071925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.072156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.072166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.072382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.072393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:24.346 [2024-07-15 19:40:35.072622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.072635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.072766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.072778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:24.346 [2024-07-15 19:40:35.073056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.073066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:24.346 [2024-07-15 19:40:35.073212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.073226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.073368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.073378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 
00:34:24.346 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:24.346 [2024-07-15 19:40:35.073547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.073559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.346 [2024-07-15 19:40:35.073814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.073826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.074077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.074087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.074215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.074229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.074362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.074372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.074632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.074642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.074823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.074834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.075001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.075012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.075217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.075231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 
00:34:24.346 [2024-07-15 19:40:35.075361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.075371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.075495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.075504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.075689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.075698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.075876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.075886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.076007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.076016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.076196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.076207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.076325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.076335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.076466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.076477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.076646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.076656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.346 [2024-07-15 19:40:35.076940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.076949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 
00:34:24.346 [2024-07-15 19:40:35.077059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.346 [2024-07-15 19:40:35.077069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.346 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.077185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.077196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.077432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.077442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.077557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.077566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.077687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.077700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.077819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.077829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.077987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.077997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.078110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.078119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.078278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.078289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.078482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.078492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 
00:34:24.347 [2024-07-15 19:40:35.078615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.078625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.078731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.078740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.078993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.079003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.079199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.079209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.079344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.079354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.079434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.079444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.079672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.079684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.079890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.079900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.080068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.080078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.080202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.080212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 
00:34:24.347 [2024-07-15 19:40:35.080404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.080414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.080528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.080538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.080719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.080729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.080945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.080955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.081060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.081070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.081181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.081193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.081324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.081336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.081456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.081467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.081577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.081589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.081738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.081747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 
00:34:24.347 [2024-07-15 19:40:35.081860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.081869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.081973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.081982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.082091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.082101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.082214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.082229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.082329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.082339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.082456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.082466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.082586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.082596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.082781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.082793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.082912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.082922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.083023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.083034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 
00:34:24.347 [2024-07-15 19:40:35.083150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.083162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.083368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.083379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.083489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.083499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.347 qpair failed and we were unable to recover it. 00:34:24.347 [2024-07-15 19:40:35.083609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.347 [2024-07-15 19:40:35.083619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.083785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.083795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.083981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.083991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.084119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.084129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.084217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.084229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.084409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.084419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.084612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.084622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 
00:34:24.348 [2024-07-15 19:40:35.084700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.084709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.084823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.084834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.084947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.084957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.085035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.085044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.085153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.085165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.085291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.085301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.085373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.085384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.085556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.085567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.085680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.085690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.085802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.085812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 
00:34:24.348 [2024-07-15 19:40:35.085930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.085941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.086024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.086033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.086158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.086168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.086403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.086414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.086535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.086545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.086661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.086670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.086770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.086780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.086880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.086890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.087001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.087011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.087105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.087115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 
00:34:24.348 [2024-07-15 19:40:35.087221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.087241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.087472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.087482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.087661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.087671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.087845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.087855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.087952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.087961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.088067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.088076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.088191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.088201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.088459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.088469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.088569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.088578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.088689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.088699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 
00:34:24.348 [2024-07-15 19:40:35.088886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.348 [2024-07-15 19:40:35.088896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.348 qpair failed and we were unable to recover it. 00:34:24.348 [2024-07-15 19:40:35.089030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.089042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.089228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.089238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.089422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.089432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.089542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.089551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.089716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.089726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.089841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.089851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.089974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.089985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.090169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.090179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.090342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.090353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 
00:34:24.349 [2024-07-15 19:40:35.090480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.090491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.090665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.090675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.090776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.090786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.090949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.090958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.091122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.091134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.091234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.091245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.091390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.091401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.091566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.091576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.091750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.091760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.091924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.091934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 
00:34:24.349 [2024-07-15 19:40:35.092099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.092109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.092281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.092290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.092394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.092404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.092594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.092604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.092792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.092802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.093017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.093027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.093159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.093169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.093264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.093274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.093395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.093405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.093582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.093593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 
00:34:24.349 [2024-07-15 19:40:35.093706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.093716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.093844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.093854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.093960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.093970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.094149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.094159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.094352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.094362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.094590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.094600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.094744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.094754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.094867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.094876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.095048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.095058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.095180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.095190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 
00:34:24.349 [2024-07-15 19:40:35.095325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.095336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.095455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.095464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.095588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.095598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.095674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.095684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.349 [2024-07-15 19:40:35.095798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.349 [2024-07-15 19:40:35.095808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.349 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.095971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.095980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.096106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.096116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.096320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.096330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.096431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.096441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.096547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.096558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 
00:34:24.350 [2024-07-15 19:40:35.096672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.096682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.096872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.096882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.096993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.097003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.097102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.097112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.097231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.097243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.097351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.097361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.097480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.097490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.097605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.097617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.097794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.097804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.097982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.097992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 
00:34:24.350 [2024-07-15 19:40:35.098164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.098174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.098272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.098283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.098377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.098386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.098519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.098528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.098657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.098668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.098766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.098776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.098897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.098907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.099020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.099031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.099125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.099135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.099235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.099246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 
00:34:24.350 [2024-07-15 19:40:35.099347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.099357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.099457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.099467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.099647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.099656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.099766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.099776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.099941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.099951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.100060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.100071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.100176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.100185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.100297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.100307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.100472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.100481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.100661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.100671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 
00:34:24.350 [2024-07-15 19:40:35.100796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.100806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.100924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.100934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.101023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.101032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.101157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.101166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.101275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.101286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.101398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.101407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.101523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.101533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.101677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.101687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.101816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.101826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 00:34:24.350 [2024-07-15 19:40:35.101988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.101997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.350 qpair failed and we were unable to recover it. 
00:34:24.350 [2024-07-15 19:40:35.102144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.350 [2024-07-15 19:40:35.102153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.102270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.102280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.102445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.102456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.102626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.102637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.102873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.102886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.103008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.103018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.103121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.103130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.103299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.103309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.103421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.103430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.103514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.103524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 
00:34:24.351 [2024-07-15 19:40:35.103676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.103687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.103789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.103799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.103915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.103925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.104090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.104100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.104271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.104282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.104448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.104458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.104571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.104580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.104664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.104674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.104778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.104789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.105019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.105028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 
00:34:24.351 [2024-07-15 19:40:35.105141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.105150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.105259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.105270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.105502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.105512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.105619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.105628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.105743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.105752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.105856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.105866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:24.351 [2024-07-15 19:40:35.106034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.106048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.106233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.106243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.106362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.106373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 
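The xtrace line above installs the suite's cleanup trap. As a hedged illustration (process_shm and nvmftestfini are harness helpers defined elsewhere in the SPDK test scripts; their bodies do not appear in this log), the pattern in isolation is:

    # On interrupt or exit, dump the app's shared-memory state, then tear down
    # the NVMf test environment; '|| :' keeps a failed shm dump from
    # short-circuiting the final cleanup.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT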
00:34:24.351 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:24.351 [2024-07-15 19:40:35.106559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.106570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.106746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.106757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.351 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.106880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.106903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.351 [2024-07-15 19:40:35.107031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.107051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.107181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.107195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.107387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.107402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.107514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.107528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.107640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.107653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.107835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.107849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 
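The rpc_cmd trace above creates the Malloc bdev the test will use: 64 MB total size with 512-byte blocks, named Malloc0. Outside the harness an equivalent invocation would look roughly like the sketch below (the scripts/rpc.py path and the default local RPC socket are assumptions, not taken from this log):

    # Create a RAM-backed bdev named Malloc0: 64 MB total, 512-byte block size.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0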
00:34:24.351 [2024-07-15 19:40:35.107954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.107969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.108097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.108111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.108275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.108290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.108401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.108414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.108634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.108652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.108775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.108789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.108894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.108907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.109030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.351 [2024-07-15 19:40:35.109044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.351 qpair failed and we were unable to recover it. 00:34:24.351 [2024-07-15 19:40:35.109163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.109176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.109383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.109397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 
00:34:24.352 [2024-07-15 19:40:35.109570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.109583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.109697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.109711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.109821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.109834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.110015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.110029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.110132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.110145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.110379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.110393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.110510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.110523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.110644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.110657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.110773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.110787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.110899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.110912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 
00:34:24.352 [2024-07-15 19:40:35.111087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.111100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc36c000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.111283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.111295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.111399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.111408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.111596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.111605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.111711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.111721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.111882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.111892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.111987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.111997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.112111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.112120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.112235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.112245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.112336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.112345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 
00:34:24.352 [2024-07-15 19:40:35.112520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.112530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.112710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.112720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.112819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.112829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.112935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.112944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.113099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.113108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.113331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.113341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.113603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.113613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.113726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.113736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.113839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.113849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.113952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.113962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 
00:34:24.352 [2024-07-15 19:40:35.114131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.114142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.114305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.114316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.114416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.114426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.114600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.114610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.114712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.114726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.114844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.114854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.114990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.115000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.115114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.115124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.115320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.115331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 00:34:24.352 [2024-07-15 19:40:35.115499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.352 [2024-07-15 19:40:35.115511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.352 qpair failed and we were unable to recover it. 
00:34:24.352 [2024-07-15 19:40:35.115643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.115653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.115762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.115772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.115889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.115899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.116008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.116018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.116187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.116198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.116383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.116394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.116499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.116509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.116675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.116686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.116803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.116813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.116983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.116994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 
00:34:24.353 [2024-07-15 19:40:35.117093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.117104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.117285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.117296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.117476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.117487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.117609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.117619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.117801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.117812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.117950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.117961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.118059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.118070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.118250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.118261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.118502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.118512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.118591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.118601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 
00:34:24.353 [2024-07-15 19:40:35.118766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.118777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.118952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.118964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.119198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.119208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.119317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.119331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.119501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.119511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.119682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.119692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.119799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.119810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.119990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.120000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.120162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.120172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.120292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.120302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 
00:34:24.353 [2024-07-15 19:40:35.120477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.120488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.120652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.120662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.120839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.353 [2024-07-15 19:40:35.120851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.353 qpair failed and we were unable to recover it. 00:34:24.353 [2024-07-15 19:40:35.120950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.120960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.121137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.121147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.121324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.121335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.121572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.121582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.121689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.121699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.121886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.121896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.122065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.122074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 
00:34:24.354 [2024-07-15 19:40:35.122249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.122259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.122451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.122461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.122645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.122656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.122824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.122835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.122933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.122942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.123039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.123049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.123328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.123340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.123441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.123451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.123564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.123575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.123758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.123768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 
00:34:24.354 [2024-07-15 19:40:35.123942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.123953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.124106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.124116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.124297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.124307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.124433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.124443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.124543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.124554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.124800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.124811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.124975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.124985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.125112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.125121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.125306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.125317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.125419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.125429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 
00:34:24.354 [2024-07-15 19:40:35.125536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.125546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.125646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.125658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.125851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.125861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.125972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.125982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.126145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.126155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.126247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.126256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.126382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.126392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.126507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.126516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.126694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.126703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.126815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.126825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 
00:34:24.354 [2024-07-15 19:40:35.126994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.127003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.127200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.127209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.127430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.354 [2024-07-15 19:40:35.127440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.354 qpair failed and we were unable to recover it. 00:34:24.354 [2024-07-15 19:40:35.127609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.127621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 Malloc0 00:34:24.355 [2024-07-15 19:40:35.127730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.127740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.127856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.127866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.127964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.127981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.355 [2024-07-15 19:40:35.128159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.128170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 
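The stray "Malloc0" echoed between the connection errors above appears to be the name of the RAM-backed bdev the harness later attaches to the subsystem as a namespace. Creating such a bdev is normally a single rpc.py call against the running target; a minimal sketch follows, where the 64 MiB size and 512-byte block size are illustrative assumptions, since the harness's real parameters are not visible in this excerpt:

  # create a RAM-backed bdev named Malloc0 (size and block size assumed for illustration)
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0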
00:34:24.355 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:24.355 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.355 [2024-07-15 19:40:35.128424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.128435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.355 [2024-07-15 19:40:35.128547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.128557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.128662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.128672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.128860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.128870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.128979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.128989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.129122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.129132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.129255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.129265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.129453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.129463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.129579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.129591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 
00:34:24.355 [2024-07-15 19:40:35.129823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.129832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.129984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.129994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.130093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.130103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.130232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.130243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.130454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.130464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.130576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.130586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.130695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.130704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.130806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.130815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.130995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.131005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.131265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.131275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 
00:34:24.355 [2024-07-15 19:40:35.131400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:24.355 [2024-07-15 19:40:35.131443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.131452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.131618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.131628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.131775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.131786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.131909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.131919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.132107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.132116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.132231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.132241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.132419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.132428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.132621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.132630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.132811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.132820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 
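Interleaved with the failing initiator-side connects, the target side reports "*** TCP Transport Init ***", the acknowledgement of the rpc_cmd nvmf_create_transport -t tcp -o call issued by host/target_disconnect.sh a few entries earlier. Outside the harness the same step is usually driven through SPDK's rpc.py; a minimal sketch with only the transport type (the extra -o flag used by the harness is omitted here):

  # create the TCP transport on an already-running nvmf_tgt
  ./scripts/rpc.py nvmf_create_transport -t tcp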
00:34:24.355 [2024-07-15 19:40:35.132999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.133008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.133172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.133181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.133300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.133310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.133509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.133519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.133597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.133606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.355 qpair failed and we were unable to recover it. 00:34:24.355 [2024-07-15 19:40:35.133786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.355 [2024-07-15 19:40:35.133796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.133944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.133954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.134136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.134146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.134248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.134258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.134385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.134395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 
00:34:24.356 [2024-07-15 19:40:35.134516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.134525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.134700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.134710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.134823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.134834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.134947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.134957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.135129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.135139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.135318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.135328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.135441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.135450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.135551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.135561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.135752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.135762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.135951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.135960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 
00:34:24.356 [2024-07-15 19:40:35.136062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.136071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.136340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.136351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.136520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.136531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.356 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:24.356 [2024-07-15 19:40:35.136711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.136721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.356 [2024-07-15 19:40:35.136868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.136878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.356 [2024-07-15 19:40:35.137013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.137023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.137254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.137264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.137516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.137526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 
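Here the harness calls rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, creating the subsystem with any-host access (-a) and serial number SPDK00000000000001. A typical rpc.py sequence to finish wiring up such a subsystem is sketched below; the namespace and listener steps are assumptions based on the Malloc0 bdev and the 10.0.0.2:4420 address seen elsewhere in this log, not commands shown on this line:

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # assumed follow-up: expose the Malloc0 bdev and listen on the address the initiator keeps retrying
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420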
00:34:24.356 [2024-07-15 19:40:35.137798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.137809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.138014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.138024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.138209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.138219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.138459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.138469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.138597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.138608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.138806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.138816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.138995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.139004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.139120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.139130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.139303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.139314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.139439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.139449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 
00:34:24.356 [2024-07-15 19:40:35.139533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.139543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.139653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.139663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.139847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.139857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.140017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.140027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.140202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.140212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.140403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.140413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.356 qpair failed and we were unable to recover it. 00:34:24.356 [2024-07-15 19:40:35.140589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.356 [2024-07-15 19:40:35.140599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.140736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.140746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.140875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.140885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.141071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.141081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 
00:34:24.357 [2024-07-15 19:40:35.141189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.141199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.141385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.141395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.141598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.141608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.141735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.141745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.141842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.141852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.142031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.142042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.142229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.142240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.142404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.142414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.142580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.142590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.142793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.142803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 
00:34:24.357 [2024-07-15 19:40:35.142990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.143002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.143232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.143242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.143413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.143422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.143586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.143595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.143800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.143809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.143916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.143927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.144123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.144133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.144298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.144308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.144446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.144456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 
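errno 111 in the posix_sock_create failures above is Linux ECONNREFUSED: the initiator's connect() reaches the host, but nothing is accepting on 10.0.0.2 port 4420 at that moment, so the qpair cannot be established and the retry loop keeps logging the same pair of errors. A minimal, illustrative reproduction of the same symptom (address and port taken from the trace; nothing below is part of the test harness):

#!/usr/bin/env bash
# Try a raw TCP connect to the address/port seen in the log. With no NVMe/TCP
# listener bound there, the connect is refused, which is exactly what
# errno = 111 (ECONNREFUSED) means in the errors above.
if ! ( exec 3<>/dev/tcp/10.0.0.2/4420 ) 2>/dev/null; then
    python3 -c 'import errno; print("errno 111 =", errno.errorcode[111])'
fi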
00:34:24.357 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.357 [2024-07-15 19:40:35.144565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.144574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:24.357 [2024-07-15 19:40:35.144693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.144704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.357 [2024-07-15 19:40:35.144800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.144811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.357 [2024-07-15 19:40:35.144991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.145002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.145166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.145176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.145353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.145363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.145489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.145499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.145668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.145678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 
00:34:24.357 [2024-07-15 19:40:35.145859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.357 [2024-07-15 19:40:35.145869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.357 qpair failed and we were unable to recover it. 00:34:24.357 [2024-07-15 19:40:35.146073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.146083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.146316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.146327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.146564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.146574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.146758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.146768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.146884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.146894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.147005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.147015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.147201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.147210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.147339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.147351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.147529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.147539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 
00:34:24.358 [2024-07-15 19:40:35.147703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.147712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.147942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.147952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.148079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.148088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.148197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.148207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.148380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.148391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.148505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.148515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.148622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.148632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.148811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.148821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.148933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.148943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.149041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.149051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 
00:34:24.358 [2024-07-15 19:40:35.149169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.149178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.149376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.149386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.149500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.149510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.149632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.149642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.358 [2024-07-15 19:40:35.149764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.358 [2024-07-15 19:40:35.149774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.358 qpair failed and we were unable to recover it. 00:34:24.619 [2024-07-15 19:40:35.149887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.619 [2024-07-15 19:40:35.149898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.619 qpair failed and we were unable to recover it. 00:34:24.619 [2024-07-15 19:40:35.150063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.619 [2024-07-15 19:40:35.150073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.619 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.150348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.150360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.150546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.150556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.150720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.150730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 
00:34:24.620 [2024-07-15 19:40:35.150904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.150914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.151020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.151029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.151260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.151270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.151382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.151391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.151627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.151637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.151744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.151754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.151916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.151926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.152111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.152121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.152315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.152325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.152491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.152501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 
00:34:24.620 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.620 [2024-07-15 19:40:35.152611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.152621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:24.620 [2024-07-15 19:40:35.152782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.152792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.620 [2024-07-15 19:40:35.152900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.152910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.620 [2024-07-15 19:40:35.153030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.153041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.153145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.153155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.153265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.153276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.153510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.153522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.153680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.153690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 
00:34:24.620 [2024-07-15 19:40:35.153935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.153945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.154131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.154141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.154261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.154271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.154396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.154406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.154527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.154537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.154649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.154659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.154859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.154869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.154999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.155011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.155201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.155211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.155406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.155418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 
00:34:24.620 [2024-07-15 19:40:35.155542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.155552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.155819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.155829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.155997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.156007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.156136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.156146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.156309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.620 [2024-07-15 19:40:35.156320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc364000b90 with addr=10.0.0.2, port=4420 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.156440] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.620 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.620 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:24.620 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.620 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.620 [2024-07-15 19:40:35.161941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.162026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.162045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.162052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.162058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.162078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 
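For reference, the rpc_cmd calls interleaved with the connect noise above (nvmf_subsystem_add_ns and nvmf_subsystem_add_listener) are ordinary SPDK JSON-RPCs. A standalone sketch of the same target-side setup, assuming a running nvmf_tgt and scripts/rpc.py from an SPDK checkout; the bdev_malloc_create step and its size/block-size values are added for completeness and are illustrative, not taken from this run:

#!/usr/bin/env bash
RPC=./scripts/rpc.py   # path assumes an SPDK source tree; adjust as needed

$RPC bdev_malloc_create 64 512 -b Malloc0                      # backing bdev (illustrative size)
$RPC nvmf_create_transport -t TCP                               # enable the TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a        # allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # as in the trace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listeners are added, the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen above, and the previously refused connects can begin to complete.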
00:34:24.620 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.620 19:40:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1843220 00:34:24.620 [2024-07-15 19:40:35.171954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.172032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.172048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.172055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.172061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.172076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.181956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.182024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.182042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.182049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.182054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.182069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.191881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.191997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.192013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.192020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.192026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.192041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 
00:34:24.620 [2024-07-15 19:40:35.201879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.201947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.201962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.201969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.201974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.201989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.211957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.212023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.212038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.212044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.212050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.212065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.221958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.222024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.222039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.222046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.222052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.222067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 
00:34:24.620 [2024-07-15 19:40:35.231984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.232052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.232067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.232074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.232080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.232094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.242017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.242127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.242143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.242150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.242156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.242171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.252046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.252117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.252131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.252138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.252144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.252160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 
00:34:24.620 [2024-07-15 19:40:35.262062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.262127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.262142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.262149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.262154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.262169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.272149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.272218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.272241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.272248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.272253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.620 [2024-07-15 19:40:35.272268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.620 qpair failed and we were unable to recover it. 00:34:24.620 [2024-07-15 19:40:35.282110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.620 [2024-07-15 19:40:35.282183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.620 [2024-07-15 19:40:35.282198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.620 [2024-07-15 19:40:35.282204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.620 [2024-07-15 19:40:35.282210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.282229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 
00:34:24.621 [2024-07-15 19:40:35.292240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.292307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.292322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.292329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.292334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.292350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.302151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.302245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.302260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.302266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.302272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.302287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.312243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.312334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.312348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.312355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.312361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.312379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 
00:34:24.621 [2024-07-15 19:40:35.322267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.322335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.322350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.322357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.322363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.322377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.332284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.332346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.332361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.332367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.332374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.332388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.342346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.342456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.342471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.342477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.342483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.342498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 
00:34:24.621 [2024-07-15 19:40:35.352286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.352351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.352365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.352372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.352378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.352392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.362461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.362544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.362561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.362567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.362573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.362587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.372456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.372562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.372584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.372590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.372596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.372610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 
00:34:24.621 [2024-07-15 19:40:35.382452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.382562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.382576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.382582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.382588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.382603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.392465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.392554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.392568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.392575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.392581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.392595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.402509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.402587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.402602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.402608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.402617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.402632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 
00:34:24.621 [2024-07-15 19:40:35.412531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.412603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.412618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.412624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.412630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.412644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.422496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.422562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.422577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.422583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.422589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.422603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.432515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.432579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.432593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.432600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.432606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.432620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 
00:34:24.621 [2024-07-15 19:40:35.442606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.442701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.442715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.442722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.442728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.442742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.452642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.452706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.452720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.452727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.452733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.452747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 00:34:24.621 [2024-07-15 19:40:35.462704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.621 [2024-07-15 19:40:35.462770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.621 [2024-07-15 19:40:35.462784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.621 [2024-07-15 19:40:35.462790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.621 [2024-07-15 19:40:35.462796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.621 [2024-07-15 19:40:35.462810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.621 qpair failed and we were unable to recover it. 
00:34:24.882 [2024-07-15 19:40:35.472671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.882 [2024-07-15 19:40:35.472739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.882 [2024-07-15 19:40:35.472753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.882 [2024-07-15 19:40:35.472760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.882 [2024-07-15 19:40:35.472767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.882 [2024-07-15 19:40:35.472781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.882 qpair failed and we were unable to recover it. 00:34:24.882 [2024-07-15 19:40:35.482642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.882 [2024-07-15 19:40:35.482707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.882 [2024-07-15 19:40:35.482722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.882 [2024-07-15 19:40:35.482728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.882 [2024-07-15 19:40:35.482734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.882 [2024-07-15 19:40:35.482748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.882 qpair failed and we were unable to recover it. 00:34:24.882 [2024-07-15 19:40:35.492671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.882 [2024-07-15 19:40:35.492732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.882 [2024-07-15 19:40:35.492746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.882 [2024-07-15 19:40:35.492756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.882 [2024-07-15 19:40:35.492762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.882 [2024-07-15 19:40:35.492776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.882 qpair failed and we were unable to recover it. 
00:34:24.882 [2024-07-15 19:40:35.502795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.882 [2024-07-15 19:40:35.502858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.882 [2024-07-15 19:40:35.502871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.882 [2024-07-15 19:40:35.502878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.882 [2024-07-15 19:40:35.502884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.882 [2024-07-15 19:40:35.502898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.882 qpair failed and we were unable to recover it. 00:34:24.882 [2024-07-15 19:40:35.512783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.882 [2024-07-15 19:40:35.512847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.882 [2024-07-15 19:40:35.512861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.882 [2024-07-15 19:40:35.512867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.882 [2024-07-15 19:40:35.512873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.882 [2024-07-15 19:40:35.512887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.882 qpair failed and we were unable to recover it. 00:34:24.882 [2024-07-15 19:40:35.522796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.882 [2024-07-15 19:40:35.522865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.882 [2024-07-15 19:40:35.522880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.522887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.522893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.522907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 
00:34:24.883 [2024-07-15 19:40:35.532894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.533004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.533019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.533026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.533032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.533046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 00:34:24.883 [2024-07-15 19:40:35.542914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.542980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.542994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.543001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.543006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.543021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 00:34:24.883 [2024-07-15 19:40:35.552910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.552984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.552998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.553005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.553010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.553024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 
00:34:24.883 [2024-07-15 19:40:35.562909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.562979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.562994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.563003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.563008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.563024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 00:34:24.883 [2024-07-15 19:40:35.572889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.572957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.572971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.572977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.572983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.572997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 00:34:24.883 [2024-07-15 19:40:35.583002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.583069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.583085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.583095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.583100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.583115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 
00:34:24.883 [2024-07-15 19:40:35.593022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.593123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.593137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.593144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.593151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.593165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 00:34:24.883 [2024-07-15 19:40:35.603030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.603141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.603157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.603163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.603170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.603184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 00:34:24.883 [2024-07-15 19:40:35.613129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.613228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.613243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.613250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.613256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.613270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 
00:34:24.883 [2024-07-15 19:40:35.623136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.623203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.623218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.623228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.623236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.623251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 00:34:24.883 [2024-07-15 19:40:35.633134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.633201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.633216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.633222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.633233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.633248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 00:34:24.883 [2024-07-15 19:40:35.643158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.643239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.643254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.643260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.643266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.643280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 
00:34:24.883 [2024-07-15 19:40:35.653167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.653237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.653252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.653258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.653264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.883 [2024-07-15 19:40:35.653279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.883 qpair failed and we were unable to recover it. 00:34:24.883 [2024-07-15 19:40:35.663210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.883 [2024-07-15 19:40:35.663276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.883 [2024-07-15 19:40:35.663291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.883 [2024-07-15 19:40:35.663297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.883 [2024-07-15 19:40:35.663303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.884 [2024-07-15 19:40:35.663317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.884 qpair failed and we were unable to recover it. 00:34:24.884 [2024-07-15 19:40:35.673249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.884 [2024-07-15 19:40:35.673317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.884 [2024-07-15 19:40:35.673335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.884 [2024-07-15 19:40:35.673341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.884 [2024-07-15 19:40:35.673347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.884 [2024-07-15 19:40:35.673361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.884 qpair failed and we were unable to recover it. 
00:34:24.884 [2024-07-15 19:40:35.683284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.884 [2024-07-15 19:40:35.683354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.884 [2024-07-15 19:40:35.683369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.884 [2024-07-15 19:40:35.683375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.884 [2024-07-15 19:40:35.683381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.884 [2024-07-15 19:40:35.683395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.884 qpair failed and we were unable to recover it. 00:34:24.884 [2024-07-15 19:40:35.693324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.884 [2024-07-15 19:40:35.693388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.884 [2024-07-15 19:40:35.693403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.884 [2024-07-15 19:40:35.693409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.884 [2024-07-15 19:40:35.693415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.884 [2024-07-15 19:40:35.693429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.884 qpair failed and we were unable to recover it. 00:34:24.884 [2024-07-15 19:40:35.703332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.884 [2024-07-15 19:40:35.703441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.884 [2024-07-15 19:40:35.703456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.884 [2024-07-15 19:40:35.703463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.884 [2024-07-15 19:40:35.703468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.884 [2024-07-15 19:40:35.703484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.884 qpair failed and we were unable to recover it. 
00:34:24.884 [2024-07-15 19:40:35.713413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.884 [2024-07-15 19:40:35.713477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.884 [2024-07-15 19:40:35.713491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.884 [2024-07-15 19:40:35.713498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.884 [2024-07-15 19:40:35.713503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.884 [2024-07-15 19:40:35.713523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.884 qpair failed and we were unable to recover it. 00:34:24.884 [2024-07-15 19:40:35.723352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.884 [2024-07-15 19:40:35.723418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.884 [2024-07-15 19:40:35.723433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.884 [2024-07-15 19:40:35.723440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.884 [2024-07-15 19:40:35.723445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.884 [2024-07-15 19:40:35.723460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.884 qpair failed and we were unable to recover it. 00:34:24.884 [2024-07-15 19:40:35.733420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.884 [2024-07-15 19:40:35.733512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.884 [2024-07-15 19:40:35.733527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.884 [2024-07-15 19:40:35.733533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.884 [2024-07-15 19:40:35.733539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:24.884 [2024-07-15 19:40:35.733554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.884 qpair failed and we were unable to recover it. 
00:34:25.144 [2024-07-15 19:40:35.743492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.144 [2024-07-15 19:40:35.743565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.144 [2024-07-15 19:40:35.743580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.144 [2024-07-15 19:40:35.743586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.144 [2024-07-15 19:40:35.743593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.144 [2024-07-15 19:40:35.743607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.144 qpair failed and we were unable to recover it. 00:34:25.144 [2024-07-15 19:40:35.753455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.144 [2024-07-15 19:40:35.753572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.144 [2024-07-15 19:40:35.753587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.144 [2024-07-15 19:40:35.753594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.753600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.753614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 00:34:25.145 [2024-07-15 19:40:35.763503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.763565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.763582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.763589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.763595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.763609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 
00:34:25.145 [2024-07-15 19:40:35.773572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.773683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.773698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.773704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.773710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.773725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 00:34:25.145 [2024-07-15 19:40:35.783543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.783609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.783623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.783630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.783636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.783649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 00:34:25.145 [2024-07-15 19:40:35.793578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.793640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.793655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.793661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.793667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.793681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 
00:34:25.145 [2024-07-15 19:40:35.803620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.803685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.803699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.803705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.803714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.803728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 00:34:25.145 [2024-07-15 19:40:35.813684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.813756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.813771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.813777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.813784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.813798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 00:34:25.145 [2024-07-15 19:40:35.823672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.823734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.823748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.823754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.823760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.823774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 
00:34:25.145 [2024-07-15 19:40:35.833691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.833761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.833775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.833781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.833787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.833801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 00:34:25.145 [2024-07-15 19:40:35.843735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.843801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.843815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.843821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.843827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.843842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 00:34:25.145 [2024-07-15 19:40:35.853741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.853807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.853821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.853827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.853833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.853847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 
00:34:25.145 [2024-07-15 19:40:35.863785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.863850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.863864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.863871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.863877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.863891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 00:34:25.145 [2024-07-15 19:40:35.873813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.873880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.873894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.873900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.873906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.873920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 00:34:25.145 [2024-07-15 19:40:35.883870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.145 [2024-07-15 19:40:35.883932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.145 [2024-07-15 19:40:35.883947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.145 [2024-07-15 19:40:35.883954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.145 [2024-07-15 19:40:35.883960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.145 [2024-07-15 19:40:35.883974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.145 qpair failed and we were unable to recover it. 
00:34:25.145 [2024-07-15 19:40:35.893925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.894032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.894054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.894061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.894070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.894084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 00:34:25.146 [2024-07-15 19:40:35.903947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.904059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.904075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.904081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.904087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.904102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 00:34:25.146 [2024-07-15 19:40:35.913917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.913983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.913997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.914004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.914010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.914024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 
00:34:25.146 [2024-07-15 19:40:35.923879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.923942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.923956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.923963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.923969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.923983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 00:34:25.146 [2024-07-15 19:40:35.933972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.934034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.934049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.934055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.934061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.934075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 00:34:25.146 [2024-07-15 19:40:35.944021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.944096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.944110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.944117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.944123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.944137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 
00:34:25.146 [2024-07-15 19:40:35.954030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.954095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.954108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.954115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.954121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.954135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 00:34:25.146 [2024-07-15 19:40:35.964190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.964271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.964285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.964292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.964297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.964311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 00:34:25.146 [2024-07-15 19:40:35.974067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.974154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.974168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.974174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.974180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.974194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 
00:34:25.146 [2024-07-15 19:40:35.984121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.984186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.984200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.984209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.984215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.984233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 00:34:25.146 [2024-07-15 19:40:35.994173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.146 [2024-07-15 19:40:35.994245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.146 [2024-07-15 19:40:35.994260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.146 [2024-07-15 19:40:35.994267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.146 [2024-07-15 19:40:35.994273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.146 [2024-07-15 19:40:35.994287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.146 qpair failed and we were unable to recover it. 00:34:25.407 [2024-07-15 19:40:36.004194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.407 [2024-07-15 19:40:36.004311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.407 [2024-07-15 19:40:36.004326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.407 [2024-07-15 19:40:36.004333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.407 [2024-07-15 19:40:36.004340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.407 [2024-07-15 19:40:36.004356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.407 qpair failed and we were unable to recover it. 
00:34:25.407 [2024-07-15 19:40:36.014193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.407 [2024-07-15 19:40:36.014300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.407 [2024-07-15 19:40:36.014315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.407 [2024-07-15 19:40:36.014322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.407 [2024-07-15 19:40:36.014328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.407 [2024-07-15 19:40:36.014344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.407 qpair failed and we were unable to recover it. 00:34:25.407 [2024-07-15 19:40:36.024235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.407 [2024-07-15 19:40:36.024301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.407 [2024-07-15 19:40:36.024316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.407 [2024-07-15 19:40:36.024323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.407 [2024-07-15 19:40:36.024329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.407 [2024-07-15 19:40:36.024344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.407 qpair failed and we were unable to recover it. 00:34:25.407 [2024-07-15 19:40:36.034300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.407 [2024-07-15 19:40:36.034367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.407 [2024-07-15 19:40:36.034381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.407 [2024-07-15 19:40:36.034388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.407 [2024-07-15 19:40:36.034394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.407 [2024-07-15 19:40:36.034408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.407 qpair failed and we were unable to recover it. 
00:34:25.407 [2024-07-15 19:40:36.044354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.407 [2024-07-15 19:40:36.044428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.407 [2024-07-15 19:40:36.044442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.407 [2024-07-15 19:40:36.044449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.407 [2024-07-15 19:40:36.044454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.407 [2024-07-15 19:40:36.044469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.407 qpair failed and we were unable to recover it. 00:34:25.407 [2024-07-15 19:40:36.054264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.407 [2024-07-15 19:40:36.054329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.407 [2024-07-15 19:40:36.054343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.407 [2024-07-15 19:40:36.054349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.407 [2024-07-15 19:40:36.054355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.407 [2024-07-15 19:40:36.054370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.407 qpair failed and we were unable to recover it. 00:34:25.407 [2024-07-15 19:40:36.064335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.407 [2024-07-15 19:40:36.064400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.407 [2024-07-15 19:40:36.064415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.407 [2024-07-15 19:40:36.064421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.407 [2024-07-15 19:40:36.064427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.407 [2024-07-15 19:40:36.064441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.407 qpair failed and we were unable to recover it. 
00:34:25.407 [2024-07-15 19:40:36.074392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.407 [2024-07-15 19:40:36.074457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.407 [2024-07-15 19:40:36.074474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.407 [2024-07-15 19:40:36.074480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.407 [2024-07-15 19:40:36.074486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.407 [2024-07-15 19:40:36.074500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.407 qpair failed and we were unable to recover it. 00:34:25.407 [2024-07-15 19:40:36.084391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.407 [2024-07-15 19:40:36.084454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.407 [2024-07-15 19:40:36.084468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.407 [2024-07-15 19:40:36.084475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.407 [2024-07-15 19:40:36.084480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.407 [2024-07-15 19:40:36.084494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.407 qpair failed and we were unable to recover it. 00:34:25.407 [2024-07-15 19:40:36.094447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.407 [2024-07-15 19:40:36.094510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.407 [2024-07-15 19:40:36.094524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.407 [2024-07-15 19:40:36.094531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.407 [2024-07-15 19:40:36.094537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.094551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 
00:34:25.408 [2024-07-15 19:40:36.104499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.104561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.104575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.104581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.104587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.104601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 00:34:25.408 [2024-07-15 19:40:36.114490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.114553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.114568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.114574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.114580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.114597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 00:34:25.408 [2024-07-15 19:40:36.124493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.124563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.124578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.124584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.124590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.124604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 
00:34:25.408 [2024-07-15 19:40:36.134605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.134688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.134703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.134709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.134715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.134729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 00:34:25.408 [2024-07-15 19:40:36.144574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.144639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.144654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.144660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.144666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.144680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 00:34:25.408 [2024-07-15 19:40:36.154629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.154736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.154759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.154766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.154772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.154786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 
00:34:25.408 [2024-07-15 19:40:36.164675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.164741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.164758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.164764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.164770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.164784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 00:34:25.408 [2024-07-15 19:40:36.174655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.174719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.174733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.174739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.174746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.174760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 00:34:25.408 [2024-07-15 19:40:36.184737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.184800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.184814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.184821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.184827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.184841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 
00:34:25.408 [2024-07-15 19:40:36.194722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.194787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.194801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.194807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.194813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.194828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 00:34:25.408 [2024-07-15 19:40:36.204751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.204819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.204833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.204840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.204849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.204863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 00:34:25.408 [2024-07-15 19:40:36.214796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.214888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.214902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.214909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.214915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.214929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 
00:34:25.408 [2024-07-15 19:40:36.224808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.224888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.224903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.224909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.224915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.408 [2024-07-15 19:40:36.224929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.408 qpair failed and we were unable to recover it. 00:34:25.408 [2024-07-15 19:40:36.234886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.408 [2024-07-15 19:40:36.234949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.408 [2024-07-15 19:40:36.234964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.408 [2024-07-15 19:40:36.234970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.408 [2024-07-15 19:40:36.234976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.409 [2024-07-15 19:40:36.234990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.409 qpair failed and we were unable to recover it. 00:34:25.409 [2024-07-15 19:40:36.244873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.409 [2024-07-15 19:40:36.244939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.409 [2024-07-15 19:40:36.244954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.409 [2024-07-15 19:40:36.244960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.409 [2024-07-15 19:40:36.244966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.409 [2024-07-15 19:40:36.244981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.409 qpair failed and we were unable to recover it. 
00:34:25.409 [2024-07-15 19:40:36.254895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.409 [2024-07-15 19:40:36.255009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.409 [2024-07-15 19:40:36.255024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.409 [2024-07-15 19:40:36.255031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.409 [2024-07-15 19:40:36.255037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.409 [2024-07-15 19:40:36.255051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.409 qpair failed and we were unable to recover it. 00:34:25.669 [2024-07-15 19:40:36.264944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.669 [2024-07-15 19:40:36.265011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.669 [2024-07-15 19:40:36.265025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.669 [2024-07-15 19:40:36.265032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.669 [2024-07-15 19:40:36.265038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.669 [2024-07-15 19:40:36.265052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.669 qpair failed and we were unable to recover it. 00:34:25.669 [2024-07-15 19:40:36.274958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.669 [2024-07-15 19:40:36.275024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.669 [2024-07-15 19:40:36.275038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.669 [2024-07-15 19:40:36.275044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.669 [2024-07-15 19:40:36.275050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.669 [2024-07-15 19:40:36.275063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.669 qpair failed and we were unable to recover it. 
00:34:25.669 [2024-07-15 19:40:36.284997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.669 [2024-07-15 19:40:36.285065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.669 [2024-07-15 19:40:36.285080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.669 [2024-07-15 19:40:36.285086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.669 [2024-07-15 19:40:36.285092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.285106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 00:34:25.670 [2024-07-15 19:40:36.295019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.295132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.295147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.295153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.670 [2024-07-15 19:40:36.295161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.295176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 00:34:25.670 [2024-07-15 19:40:36.305060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.305163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.305186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.305192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.670 [2024-07-15 19:40:36.305198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.305213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 
00:34:25.670 [2024-07-15 19:40:36.315059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.315126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.315140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.315147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.670 [2024-07-15 19:40:36.315152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.315166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 00:34:25.670 [2024-07-15 19:40:36.325133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.325246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.325261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.325268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.670 [2024-07-15 19:40:36.325274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.325290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 00:34:25.670 [2024-07-15 19:40:36.335129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.335192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.335206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.335212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.670 [2024-07-15 19:40:36.335218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.335236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 
00:34:25.670 [2024-07-15 19:40:36.345156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.345266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.345280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.345287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.670 [2024-07-15 19:40:36.345293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.345307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 00:34:25.670 [2024-07-15 19:40:36.355194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.355260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.355277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.355285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.670 [2024-07-15 19:40:36.355294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.355311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 00:34:25.670 [2024-07-15 19:40:36.365250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.365322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.365336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.365342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.670 [2024-07-15 19:40:36.365348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.365362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 
00:34:25.670 [2024-07-15 19:40:36.375255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.375337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.375351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.375358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.670 [2024-07-15 19:40:36.375363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.375377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 00:34:25.670 [2024-07-15 19:40:36.385314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.385383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.385397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.385407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.670 [2024-07-15 19:40:36.385412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.670 [2024-07-15 19:40:36.385427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.670 qpair failed and we were unable to recover it. 00:34:25.670 [2024-07-15 19:40:36.395287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.670 [2024-07-15 19:40:36.395356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.670 [2024-07-15 19:40:36.395371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.670 [2024-07-15 19:40:36.395377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.395383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.395397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 
00:34:25.671 [2024-07-15 19:40:36.405336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.405410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.405423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.405430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.405435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.405449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 00:34:25.671 [2024-07-15 19:40:36.415348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.415414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.415428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.415435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.415441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.415455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 00:34:25.671 [2024-07-15 19:40:36.425398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.425463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.425477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.425483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.425489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.425503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 
00:34:25.671 [2024-07-15 19:40:36.435447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.435528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.435542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.435549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.435554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.435568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 00:34:25.671 [2024-07-15 19:40:36.445456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.445518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.445532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.445538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.445544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.445558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 00:34:25.671 [2024-07-15 19:40:36.455488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.455575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.455590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.455596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.455601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.455616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 
00:34:25.671 [2024-07-15 19:40:36.465532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.465595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.465609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.465615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.465621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.465635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 00:34:25.671 [2024-07-15 19:40:36.475525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.475587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.475606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.475612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.475618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.475632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 00:34:25.671 [2024-07-15 19:40:36.485561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.485628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.485642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.485649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.485655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.485668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 
00:34:25.671 [2024-07-15 19:40:36.495590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.495653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.495667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.495674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.495680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.495694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 00:34:25.671 [2024-07-15 19:40:36.505654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.505718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.505732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.505739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.505744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.505759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 00:34:25.671 [2024-07-15 19:40:36.515573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.671 [2024-07-15 19:40:36.515636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.671 [2024-07-15 19:40:36.515650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.671 [2024-07-15 19:40:36.515656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.671 [2024-07-15 19:40:36.515662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.671 [2024-07-15 19:40:36.515679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.671 qpair failed and we were unable to recover it. 
00:34:25.932 [2024-07-15 19:40:36.525654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.525722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.525737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.525743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.525749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.525763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 00:34:25.932 [2024-07-15 19:40:36.535769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.535834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.535848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.535855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.535860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.535874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 00:34:25.932 [2024-07-15 19:40:36.545731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.545792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.545806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.545813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.545818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.545832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 
00:34:25.932 [2024-07-15 19:40:36.555834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.555911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.555926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.555932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.555938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.555951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 00:34:25.932 [2024-07-15 19:40:36.565794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.565860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.565877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.565884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.565890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.565903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 00:34:25.932 [2024-07-15 19:40:36.575855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.575920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.575935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.575942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.575947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.575961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 
00:34:25.932 [2024-07-15 19:40:36.585877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.585944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.585958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.585966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.585974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.585987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 00:34:25.932 [2024-07-15 19:40:36.595810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.595875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.595889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.595895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.595901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.595915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 00:34:25.932 [2024-07-15 19:40:36.605876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.605941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.605955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.605961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.605967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.605984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 
00:34:25.932 [2024-07-15 19:40:36.615933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.615992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.616006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.616012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.616018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.616032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 00:34:25.932 [2024-07-15 19:40:36.625960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.626022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.626037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.626043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.626048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.626062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 00:34:25.932 [2024-07-15 19:40:36.636048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.636132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.636147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.636153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.636159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.932 [2024-07-15 19:40:36.636172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.932 qpair failed and we were unable to recover it. 
00:34:25.932 [2024-07-15 19:40:36.646019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.932 [2024-07-15 19:40:36.646087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.932 [2024-07-15 19:40:36.646102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.932 [2024-07-15 19:40:36.646108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.932 [2024-07-15 19:40:36.646114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.933 [2024-07-15 19:40:36.646128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.933 qpair failed and we were unable to recover it. 00:34:25.933 [2024-07-15 19:40:36.656048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.656118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.656132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.656138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.656144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.933 [2024-07-15 19:40:36.656158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.933 qpair failed and we were unable to recover it. 00:34:25.933 [2024-07-15 19:40:36.666077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.666140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.666154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.666160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.666166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.933 [2024-07-15 19:40:36.666180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.933 qpair failed and we were unable to recover it. 
00:34:25.933 [2024-07-15 19:40:36.676117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.676182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.676197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.676203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.676209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.933 [2024-07-15 19:40:36.676223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.933 qpair failed and we were unable to recover it. 00:34:25.933 [2024-07-15 19:40:36.686133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.686233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.686247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.686254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.686260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.933 [2024-07-15 19:40:36.686274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.933 qpair failed and we were unable to recover it. 00:34:25.933 [2024-07-15 19:40:36.696178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.696245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.696259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.696265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.696274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.933 [2024-07-15 19:40:36.696289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.933 qpair failed and we were unable to recover it. 
00:34:25.933 [2024-07-15 19:40:36.706170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.706237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.706252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.706258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.706265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.933 [2024-07-15 19:40:36.706279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.933 qpair failed and we were unable to recover it. 00:34:25.933 [2024-07-15 19:40:36.716236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.716355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.716371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.716378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.716387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.933 [2024-07-15 19:40:36.716402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.933 qpair failed and we were unable to recover it. 00:34:25.933 [2024-07-15 19:40:36.726166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.726238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.726253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.726260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.726266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.933 [2024-07-15 19:40:36.726281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.933 qpair failed and we were unable to recover it. 
00:34:25.933 [2024-07-15 19:40:36.736262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.736326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.736341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.736347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.736353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc364000b90 00:34:25.933 [2024-07-15 19:40:36.736368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:25.933 qpair failed and we were unable to recover it. 00:34:25.933 [2024-07-15 19:40:36.746337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.746420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.746448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.746459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.746468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:25.933 [2024-07-15 19:40:36.746491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.933 qpair failed and we were unable to recover it. 00:34:25.933 [2024-07-15 19:40:36.756291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.756377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.756395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.756402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.756408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:25.933 [2024-07-15 19:40:36.756424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.933 qpair failed and we were unable to recover it. 
00:34:25.933 [2024-07-15 19:40:36.766302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.766371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.766387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.766396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.766403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:25.933 [2024-07-15 19:40:36.766418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.933 qpair failed and we were unable to recover it. 00:34:25.933 [2024-07-15 19:40:36.776316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.933 [2024-07-15 19:40:36.776384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.933 [2024-07-15 19:40:36.776400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.933 [2024-07-15 19:40:36.776407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.933 [2024-07-15 19:40:36.776415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:25.933 [2024-07-15 19:40:36.776430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.933 qpair failed and we were unable to recover it. 00:34:26.194 [2024-07-15 19:40:36.786435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.194 [2024-07-15 19:40:36.786502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.194 [2024-07-15 19:40:36.786518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.194 [2024-07-15 19:40:36.786528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.194 [2024-07-15 19:40:36.786534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.194 [2024-07-15 19:40:36.786548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.194 qpair failed and we were unable to recover it. 
00:34:26.194 [2024-07-15 19:40:36.796464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.194 [2024-07-15 19:40:36.796542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.194 [2024-07-15 19:40:36.796557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.796563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.796569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.796583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-15 19:40:36.806416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.806490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.806507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.806515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.806521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.806535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-15 19:40:36.816450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.816543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.816560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.816567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.816573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.816587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-15 19:40:36.826517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.826578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.826593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.826600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.826607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.826620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-15 19:40:36.836563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.836629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.836644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.836651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.836657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.836671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-15 19:40:36.846593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.846707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.846724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.846731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.846737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.846752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-15 19:40:36.856545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.856604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.856619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.856626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.856632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.856647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-15 19:40:36.866579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.866639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.866654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.866661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.866667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.866681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-15 19:40:36.876605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.876670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.876684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.876694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.876700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.876714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-15 19:40:36.886632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.886693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.886708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.886715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.886721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.886735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-15 19:40:36.896704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.896772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.896787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.896793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.896799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.896813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-15 19:40:36.906713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.906781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.906796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.906802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.906808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.906821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-15 19:40:36.916826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.916896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.916910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.916917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.916922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.916936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-15 19:40:36.926798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.926865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.926880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.926887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.926892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.926906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-15 19:40:36.936893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.195 [2024-07-15 19:40:36.936981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.195 [2024-07-15 19:40:36.936996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.195 [2024-07-15 19:40:36.937002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.195 [2024-07-15 19:40:36.937008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.195 [2024-07-15 19:40:36.937022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 
00:34:26.196 [2024-07-15 19:40:36.946862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:36.946928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:36.946944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:36.946951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:36.946957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:36.946971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-15 19:40:36.956907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:36.956969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:36.956984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:36.956991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:36.956997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:36.957011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-15 19:40:36.966927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:36.966997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:36.967012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:36.967022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:36.967028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:36.967042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 
00:34:26.196 [2024-07-15 19:40:36.976912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:36.977010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:36.977025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:36.977032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:36.977039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:36.977053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-15 19:40:36.986943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:36.987021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:36.987036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:36.987043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:36.987049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:36.987062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-15 19:40:36.997032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:36.997111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:36.997126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:36.997132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:36.997138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:36.997153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 
00:34:26.196 [2024-07-15 19:40:37.006975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:37.007049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:37.007065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:37.007071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:37.007078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:37.007093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-15 19:40:37.017018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:37.017082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:37.017097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:37.017104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:37.017110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:37.017124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-15 19:40:37.027095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:37.027162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:37.027177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:37.027184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:37.027190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:37.027204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 
00:34:26.196 [2024-07-15 19:40:37.037143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:37.037229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:37.037244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:37.037251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:37.037258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:37.037271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-15 19:40:37.047153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.196 [2024-07-15 19:40:37.047229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.196 [2024-07-15 19:40:37.047245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.196 [2024-07-15 19:40:37.047252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.196 [2024-07-15 19:40:37.047258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.196 [2024-07-15 19:40:37.047272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.457 [2024-07-15 19:40:37.057116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.457 [2024-07-15 19:40:37.057177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.457 [2024-07-15 19:40:37.057195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.457 [2024-07-15 19:40:37.057202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.457 [2024-07-15 19:40:37.057208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.457 [2024-07-15 19:40:37.057222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.457 qpair failed and we were unable to recover it. 
00:34:26.457 [2024-07-15 19:40:37.067173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.457 [2024-07-15 19:40:37.067298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.457 [2024-07-15 19:40:37.067315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.457 [2024-07-15 19:40:37.067322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.457 [2024-07-15 19:40:37.067327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.457 [2024-07-15 19:40:37.067342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.457 qpair failed and we were unable to recover it. 00:34:26.457 [2024-07-15 19:40:37.077187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.457 [2024-07-15 19:40:37.077256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.457 [2024-07-15 19:40:37.077272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.457 [2024-07-15 19:40:37.077278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.457 [2024-07-15 19:40:37.077284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.457 [2024-07-15 19:40:37.077298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.457 qpair failed and we were unable to recover it. 00:34:26.457 [2024-07-15 19:40:37.087236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.457 [2024-07-15 19:40:37.087312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.457 [2024-07-15 19:40:37.087328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.457 [2024-07-15 19:40:37.087334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.457 [2024-07-15 19:40:37.087340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.457 [2024-07-15 19:40:37.087354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.457 qpair failed and we were unable to recover it. 
00:34:26.457 [2024-07-15 19:40:37.097343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.457 [2024-07-15 19:40:37.097416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.457 [2024-07-15 19:40:37.097431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.457 [2024-07-15 19:40:37.097438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.457 [2024-07-15 19:40:37.097444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.457 [2024-07-15 19:40:37.097461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.457 qpair failed and we were unable to recover it. 00:34:26.457 [2024-07-15 19:40:37.107263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.457 [2024-07-15 19:40:37.107328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.457 [2024-07-15 19:40:37.107343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.457 [2024-07-15 19:40:37.107350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.457 [2024-07-15 19:40:37.107356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.457 [2024-07-15 19:40:37.107370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.457 qpair failed and we were unable to recover it. 00:34:26.457 [2024-07-15 19:40:37.117305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.457 [2024-07-15 19:40:37.117373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.457 [2024-07-15 19:40:37.117388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.457 [2024-07-15 19:40:37.117394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.457 [2024-07-15 19:40:37.117400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.457 [2024-07-15 19:40:37.117414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.457 qpair failed and we were unable to recover it. 
00:34:26.457 [2024-07-15 19:40:37.127319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.457 [2024-07-15 19:40:37.127422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.457 [2024-07-15 19:40:37.127437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.457 [2024-07-15 19:40:37.127444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.457 [2024-07-15 19:40:37.127450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.457 [2024-07-15 19:40:37.127464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.457 qpair failed and we were unable to recover it. 00:34:26.457 [2024-07-15 19:40:37.137474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.457 [2024-07-15 19:40:37.137542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.457 [2024-07-15 19:40:37.137557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.457 [2024-07-15 19:40:37.137563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.457 [2024-07-15 19:40:37.137569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.457 [2024-07-15 19:40:37.137583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.457 qpair failed and we were unable to recover it. 00:34:26.457 [2024-07-15 19:40:37.147444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.457 [2024-07-15 19:40:37.147510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.457 [2024-07-15 19:40:37.147531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.457 [2024-07-15 19:40:37.147541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.457 [2024-07-15 19:40:37.147547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.457 [2024-07-15 19:40:37.147561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.457 qpair failed and we were unable to recover it. 
00:34:26.457 [2024-07-15 19:40:37.157435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.157501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.157517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.157523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.157529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.157543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.458 [2024-07-15 19:40:37.167448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.167517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.167532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.167539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.167545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.167559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.458 [2024-07-15 19:40:37.177530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.177635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.177657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.177664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.177670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.177684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 
00:34:26.458 [2024-07-15 19:40:37.187616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.187722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.187738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.187745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.187751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.187769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.458 [2024-07-15 19:40:37.197612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.197696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.197711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.197717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.197723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.197737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.458 [2024-07-15 19:40:37.207542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.207611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.207626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.207633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.207639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.207653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 
00:34:26.458 [2024-07-15 19:40:37.217641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.217703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.217717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.217724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.217729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.217743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.458 [2024-07-15 19:40:37.227703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.227773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.227787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.227794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.227799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.227813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.458 [2024-07-15 19:40:37.237733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.237797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.237814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.237821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.237826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.237840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 
00:34:26.458 [2024-07-15 19:40:37.247775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.247843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.247858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.247864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.247870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.247884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.458 [2024-07-15 19:40:37.257788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.257854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.257869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.257875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.257881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.257894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.458 [2024-07-15 19:40:37.267824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.267901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.267915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.267922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.267927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.267941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 
00:34:26.458 [2024-07-15 19:40:37.277772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.277837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.277851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.277858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.277864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.277881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.458 [2024-07-15 19:40:37.287865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.287925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.287940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.287947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.287953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.287966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.458 [2024-07-15 19:40:37.297903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.298011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.298027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.298034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.298040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.298054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 
00:34:26.458 [2024-07-15 19:40:37.307889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.458 [2024-07-15 19:40:37.307950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.458 [2024-07-15 19:40:37.307965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.458 [2024-07-15 19:40:37.307971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.458 [2024-07-15 19:40:37.307977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.458 [2024-07-15 19:40:37.307991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.458 qpair failed and we were unable to recover it. 00:34:26.719 [2024-07-15 19:40:37.317954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.719 [2024-07-15 19:40:37.318021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.719 [2024-07-15 19:40:37.318036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.719 [2024-07-15 19:40:37.318043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.719 [2024-07-15 19:40:37.318050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.719 [2024-07-15 19:40:37.318063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.719 qpair failed and we were unable to recover it. 00:34:26.719 [2024-07-15 19:40:37.327962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.719 [2024-07-15 19:40:37.328025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.719 [2024-07-15 19:40:37.328043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.719 [2024-07-15 19:40:37.328049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.719 [2024-07-15 19:40:37.328055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.719 [2024-07-15 19:40:37.328069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.719 qpair failed and we were unable to recover it. 
00:34:26.719 [2024-07-15 19:40:37.338032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.719 [2024-07-15 19:40:37.338100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.719 [2024-07-15 19:40:37.338115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.719 [2024-07-15 19:40:37.338121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.719 [2024-07-15 19:40:37.338127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.719 [2024-07-15 19:40:37.338141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.719 qpair failed and we were unable to recover it. 00:34:26.719 [2024-07-15 19:40:37.348035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.719 [2024-07-15 19:40:37.348100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.719 [2024-07-15 19:40:37.348117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.719 [2024-07-15 19:40:37.348124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.719 [2024-07-15 19:40:37.348130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.719 [2024-07-15 19:40:37.348144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.719 qpair failed and we were unable to recover it. 00:34:26.719 [2024-07-15 19:40:37.358075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.719 [2024-07-15 19:40:37.358141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.719 [2024-07-15 19:40:37.358157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.719 [2024-07-15 19:40:37.358164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.719 [2024-07-15 19:40:37.358170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.719 [2024-07-15 19:40:37.358183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.719 qpair failed and we were unable to recover it. 
00:34:26.719 [2024-07-15 19:40:37.368019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.719 [2024-07-15 19:40:37.368088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.719 [2024-07-15 19:40:37.368103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.719 [2024-07-15 19:40:37.368109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.719 [2024-07-15 19:40:37.368118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.719 [2024-07-15 19:40:37.368132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.719 qpair failed and we were unable to recover it. 00:34:26.719 [2024-07-15 19:40:37.378135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.719 [2024-07-15 19:40:37.378205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.719 [2024-07-15 19:40:37.378220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.719 [2024-07-15 19:40:37.378231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.719 [2024-07-15 19:40:37.378237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.719 [2024-07-15 19:40:37.378251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.719 qpair failed and we were unable to recover it. 00:34:26.719 [2024-07-15 19:40:37.388161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.719 [2024-07-15 19:40:37.388227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.719 [2024-07-15 19:40:37.388243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.719 [2024-07-15 19:40:37.388249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.719 [2024-07-15 19:40:37.388255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.719 [2024-07-15 19:40:37.388269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.719 qpair failed and we were unable to recover it. 
00:34:26.719 [2024-07-15 19:40:37.398196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.719 [2024-07-15 19:40:37.398269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.719 [2024-07-15 19:40:37.398284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.719 [2024-07-15 19:40:37.398290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.719 [2024-07-15 19:40:37.398296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.719 [2024-07-15 19:40:37.398310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.719 qpair failed and we were unable to recover it. 00:34:26.719 [2024-07-15 19:40:37.408195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.719 [2024-07-15 19:40:37.408265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.719 [2024-07-15 19:40:37.408280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.719 [2024-07-15 19:40:37.408286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.719 [2024-07-15 19:40:37.408292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.719 [2024-07-15 19:40:37.408306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.719 qpair failed and we were unable to recover it. 00:34:26.719 [2024-07-15 19:40:37.418252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.418327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.418343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.418350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.418355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.418369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 
00:34:26.720 [2024-07-15 19:40:37.428212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.428279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.428294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.428301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.428306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.428320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 00:34:26.720 [2024-07-15 19:40:37.438306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.438370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.438385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.438392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.438397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.438411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 00:34:26.720 [2024-07-15 19:40:37.448333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.448402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.448417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.448424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.448430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.448444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 
00:34:26.720 [2024-07-15 19:40:37.458346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.458411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.458425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.458432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.458441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.458455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 00:34:26.720 [2024-07-15 19:40:37.468442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.468508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.468523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.468530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.468536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.468549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 00:34:26.720 [2024-07-15 19:40:37.478409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.478474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.478488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.478495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.478501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.478515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 
00:34:26.720 [2024-07-15 19:40:37.488427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.488491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.488505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.488512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.488518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.488531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 00:34:26.720 [2024-07-15 19:40:37.498475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.498546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.498561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.498568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.498573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.498587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 00:34:26.720 [2024-07-15 19:40:37.508513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.508587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.508601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.508607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.508613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.508627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 
00:34:26.720 [2024-07-15 19:40:37.518569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.518634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.518649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.518655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.518661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.518674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 00:34:26.720 [2024-07-15 19:40:37.528490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.528563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.528578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.528584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.528590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.528603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 00:34:26.720 [2024-07-15 19:40:37.538571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.538632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.538647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.538654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.538659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.538674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 
00:34:26.720 [2024-07-15 19:40:37.548690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.548774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.548789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.720 [2024-07-15 19:40:37.548799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.720 [2024-07-15 19:40:37.548804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.720 [2024-07-15 19:40:37.548818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.720 qpair failed and we were unable to recover it. 00:34:26.720 [2024-07-15 19:40:37.558678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.720 [2024-07-15 19:40:37.558748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.720 [2024-07-15 19:40:37.558763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.721 [2024-07-15 19:40:37.558769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.721 [2024-07-15 19:40:37.558775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.721 [2024-07-15 19:40:37.558789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.721 qpair failed and we were unable to recover it. 00:34:26.721 [2024-07-15 19:40:37.568675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.721 [2024-07-15 19:40:37.568748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.721 [2024-07-15 19:40:37.568763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.721 [2024-07-15 19:40:37.568769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.721 [2024-07-15 19:40:37.568775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.721 [2024-07-15 19:40:37.568788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.721 qpair failed and we were unable to recover it. 
00:34:26.982 [2024-07-15 19:40:37.578622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.578685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.578699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.578706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.578712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.578726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-07-15 19:40:37.588737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.588803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.588817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.588824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.588830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.588843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-07-15 19:40:37.598763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.598828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.598842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.598849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.598854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.598868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 
00:34:26.982 [2024-07-15 19:40:37.608713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.608783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.608798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.608804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.608810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.608823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-07-15 19:40:37.618729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.618792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.618806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.618813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.618818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.618832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-07-15 19:40:37.628905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.628970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.628985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.628992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.628997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.629011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 
00:34:26.982 [2024-07-15 19:40:37.638808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.638905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.638919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.638929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.638935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.638949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-07-15 19:40:37.648902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.648980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.648996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.649002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.649008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.649022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-07-15 19:40:37.658933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.658999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.659014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.659021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.659026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.659041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 
00:34:26.982 [2024-07-15 19:40:37.668897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.668962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.668977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.668984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.668990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.669004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-07-15 19:40:37.679002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.679069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.679083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.679090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.679096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.679109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-07-15 19:40:37.689045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.689110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.689125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.689131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.689137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.689151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 
00:34:26.982 [2024-07-15 19:40:37.699053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.699116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.699130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.699137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.699143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.699156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.982 qpair failed and we were unable to recover it. 00:34:26.982 [2024-07-15 19:40:37.709061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.982 [2024-07-15 19:40:37.709125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.982 [2024-07-15 19:40:37.709140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.982 [2024-07-15 19:40:37.709146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.982 [2024-07-15 19:40:37.709151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.982 [2024-07-15 19:40:37.709165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 00:34:26.983 [2024-07-15 19:40:37.719160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.719248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.719264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.719270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.719276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.719290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 
00:34:26.983 [2024-07-15 19:40:37.729137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.729204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.729220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.729234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.729240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.729254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 00:34:26.983 [2024-07-15 19:40:37.739119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.739182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.739197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.739203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.739210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.739227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 00:34:26.983 [2024-07-15 19:40:37.749182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.749253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.749270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.749277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.749283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.749297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 
00:34:26.983 [2024-07-15 19:40:37.759216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.759291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.759307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.759313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.759319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.759333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 00:34:26.983 [2024-07-15 19:40:37.769249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.769311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.769326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.769333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.769338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.769352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 00:34:26.983 [2024-07-15 19:40:37.779288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.779388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.779403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.779410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.779416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.779430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 
00:34:26.983 [2024-07-15 19:40:37.789317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.789425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.789445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.789452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.789459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.789473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 00:34:26.983 [2024-07-15 19:40:37.799356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.799421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.799436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.799443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.799448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.799463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 00:34:26.983 [2024-07-15 19:40:37.809359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.809465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.809489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.809496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.809502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.809517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 
00:34:26.983 [2024-07-15 19:40:37.819397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.819508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.819528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.819535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.819542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.819557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 00:34:26.983 [2024-07-15 19:40:37.829458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.983 [2024-07-15 19:40:37.829524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.983 [2024-07-15 19:40:37.829539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.983 [2024-07-15 19:40:37.829546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.983 [2024-07-15 19:40:37.829552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:26.983 [2024-07-15 19:40:37.829566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.983 qpair failed and we were unable to recover it. 00:34:27.244 [2024-07-15 19:40:37.839504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.839572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.839587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.839593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.839599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.839613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 
00:34:27.244 [2024-07-15 19:40:37.849478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.849545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.849562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.849568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.849574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.849588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 00:34:27.244 [2024-07-15 19:40:37.859500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.859567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.859583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.859589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.859595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.859609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 00:34:27.244 [2024-07-15 19:40:37.869555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.869615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.869631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.869638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.869644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.869658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 
00:34:27.244 [2024-07-15 19:40:37.879578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.879651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.879667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.879673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.879679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.879693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 00:34:27.244 [2024-07-15 19:40:37.889616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.889680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.889696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.889702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.889708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.889722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 00:34:27.244 [2024-07-15 19:40:37.899619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.899687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.899702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.899709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.899715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.899729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 
00:34:27.244 [2024-07-15 19:40:37.909643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.909709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.909727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.909733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.909739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.909753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 00:34:27.244 [2024-07-15 19:40:37.919692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.919761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.919776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.919783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.919789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.919802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 00:34:27.244 [2024-07-15 19:40:37.929717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.929787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.929802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.929808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.929814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.929828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 
00:34:27.244 [2024-07-15 19:40:37.939767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.939828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.939843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.939849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.939855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.939869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 00:34:27.244 [2024-07-15 19:40:37.949783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.949846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.949862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.949869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.949875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.244 [2024-07-15 19:40:37.949892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.244 qpair failed and we were unable to recover it. 00:34:27.244 [2024-07-15 19:40:37.959803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.244 [2024-07-15 19:40:37.959883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.244 [2024-07-15 19:40:37.959898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.244 [2024-07-15 19:40:37.959904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.244 [2024-07-15 19:40:37.959910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:37.959925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 
00:34:27.245 [2024-07-15 19:40:37.969919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:37.970043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:37.970059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:37.970065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:37.970071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:37.970086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 00:34:27.245 [2024-07-15 19:40:37.979910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:37.979984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:37.979998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:37.980005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:37.980011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:37.980024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 00:34:27.245 [2024-07-15 19:40:37.989927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:37.989994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:37.990009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:37.990015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:37.990021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:37.990035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 
00:34:27.245 [2024-07-15 19:40:37.999986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:38.000056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:38.000074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:38.000081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:38.000087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:38.000100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 00:34:27.245 [2024-07-15 19:40:38.009976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:38.010046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:38.010062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:38.010069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:38.010075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:38.010088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 00:34:27.245 [2024-07-15 19:40:38.020018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:38.020085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:38.020100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:38.020107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:38.020112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:38.020126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 
00:34:27.245 [2024-07-15 19:40:38.030050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:38.030162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:38.030177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:38.030184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:38.030190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:38.030204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 00:34:27.245 [2024-07-15 19:40:38.040048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:38.040159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:38.040181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:38.040188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:38.040194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:38.040211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 00:34:27.245 [2024-07-15 19:40:38.050056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:38.050125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:38.050140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:38.050147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:38.050153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:38.050167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 
00:34:27.245 [2024-07-15 19:40:38.060062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:38.060167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:38.060189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:38.060196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:38.060202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:38.060216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 00:34:27.245 [2024-07-15 19:40:38.070103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:38.070170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:38.070185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:38.070192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:38.070198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:38.070211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 00:34:27.245 [2024-07-15 19:40:38.080150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:38.080220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:38.080238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:38.080244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:38.080251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:38.080264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 
00:34:27.245 [2024-07-15 19:40:38.090169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.245 [2024-07-15 19:40:38.090246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.245 [2024-07-15 19:40:38.090264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.245 [2024-07-15 19:40:38.090271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.245 [2024-07-15 19:40:38.090276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.245 [2024-07-15 19:40:38.090290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.245 qpair failed and we were unable to recover it. 00:34:27.506 [2024-07-15 19:40:38.100214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.506 [2024-07-15 19:40:38.100285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.506 [2024-07-15 19:40:38.100301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.506 [2024-07-15 19:40:38.100307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.506 [2024-07-15 19:40:38.100313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.506 [2024-07-15 19:40:38.100327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.506 qpair failed and we were unable to recover it. 00:34:27.506 [2024-07-15 19:40:38.110208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.506 [2024-07-15 19:40:38.110277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.506 [2024-07-15 19:40:38.110292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.506 [2024-07-15 19:40:38.110299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.506 [2024-07-15 19:40:38.110305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.506 [2024-07-15 19:40:38.110318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.506 qpair failed and we were unable to recover it. 
00:34:27.506 [2024-07-15 19:40:38.120234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.506 [2024-07-15 19:40:38.120303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.506 [2024-07-15 19:40:38.120319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.506 [2024-07-15 19:40:38.120325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.506 [2024-07-15 19:40:38.120331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.506 [2024-07-15 19:40:38.120344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.506 qpair failed and we were unable to recover it. 00:34:27.506 [2024-07-15 19:40:38.130288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.506 [2024-07-15 19:40:38.130349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.506 [2024-07-15 19:40:38.130365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.506 [2024-07-15 19:40:38.130371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.506 [2024-07-15 19:40:38.130380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.506 [2024-07-15 19:40:38.130394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.506 qpair failed and we were unable to recover it. 00:34:27.506 [2024-07-15 19:40:38.140266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.506 [2024-07-15 19:40:38.140377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.506 [2024-07-15 19:40:38.140393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.506 [2024-07-15 19:40:38.140400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.506 [2024-07-15 19:40:38.140406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.506 [2024-07-15 19:40:38.140421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.506 qpair failed and we were unable to recover it. 
00:34:27.506 [2024-07-15 19:40:38.150328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.506 [2024-07-15 19:40:38.150395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.506 [2024-07-15 19:40:38.150412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.506 [2024-07-15 19:40:38.150418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.506 [2024-07-15 19:40:38.150424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.506 [2024-07-15 19:40:38.150439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.506 qpair failed and we were unable to recover it. 00:34:27.506 [2024-07-15 19:40:38.160391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.506 [2024-07-15 19:40:38.160461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.506 [2024-07-15 19:40:38.160476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.506 [2024-07-15 19:40:38.160483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.506 [2024-07-15 19:40:38.160489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.506 [2024-07-15 19:40:38.160503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.506 qpair failed and we were unable to recover it. 00:34:27.506 [2024-07-15 19:40:38.170357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.506 [2024-07-15 19:40:38.170423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.506 [2024-07-15 19:40:38.170439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.506 [2024-07-15 19:40:38.170446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.506 [2024-07-15 19:40:38.170452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.170466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 
00:34:27.507 [2024-07-15 19:40:38.180454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.180522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.180538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.180544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.180550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.180564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 00:34:27.507 [2024-07-15 19:40:38.190431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.190519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.190534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.190541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.190547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.190561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 00:34:27.507 [2024-07-15 19:40:38.200479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.200543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.200558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.200565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.200571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.200585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 
00:34:27.507 [2024-07-15 19:40:38.210492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.210566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.210582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.210589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.210595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.210608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 00:34:27.507 [2024-07-15 19:40:38.220504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.220571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.220586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.220592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.220602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.220616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 00:34:27.507 [2024-07-15 19:40:38.230581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.230653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.230669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.230676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.230682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.230696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 
00:34:27.507 [2024-07-15 19:40:38.240611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.240681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.240696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.240702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.240708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.240723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 00:34:27.507 [2024-07-15 19:40:38.250662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.250731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.250748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.250754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.250760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.250775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 00:34:27.507 [2024-07-15 19:40:38.260648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.260711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.260725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.260732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.260738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.260752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 
00:34:27.507 [2024-07-15 19:40:38.270648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.270722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.270737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.270743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.270749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.270763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 00:34:27.507 [2024-07-15 19:40:38.280665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.280732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.280748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.280754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.280760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.280774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 00:34:27.507 [2024-07-15 19:40:38.290761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.290873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.290889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.290895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.290902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.290916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 
00:34:27.507 [2024-07-15 19:40:38.300700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.300769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.300784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.300791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.300797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.300811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 00:34:27.507 [2024-07-15 19:40:38.310841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.507 [2024-07-15 19:40:38.310908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.507 [2024-07-15 19:40:38.310924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.507 [2024-07-15 19:40:38.310930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.507 [2024-07-15 19:40:38.310939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.507 [2024-07-15 19:40:38.310954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.507 qpair failed and we were unable to recover it. 00:34:27.508 [2024-07-15 19:40:38.320845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.508 [2024-07-15 19:40:38.320955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.508 [2024-07-15 19:40:38.320971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.508 [2024-07-15 19:40:38.320978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.508 [2024-07-15 19:40:38.320984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.508 [2024-07-15 19:40:38.320999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.508 qpair failed and we were unable to recover it. 
00:34:27.508 [2024-07-15 19:40:38.330868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.508 [2024-07-15 19:40:38.330933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.508 [2024-07-15 19:40:38.330949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.508 [2024-07-15 19:40:38.330955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.508 [2024-07-15 19:40:38.330961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.508 [2024-07-15 19:40:38.330975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.508 qpair failed and we were unable to recover it. 00:34:27.508 [2024-07-15 19:40:38.340907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.508 [2024-07-15 19:40:38.340973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.508 [2024-07-15 19:40:38.340989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.508 [2024-07-15 19:40:38.340995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.508 [2024-07-15 19:40:38.341001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.508 [2024-07-15 19:40:38.341014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.508 qpair failed and we were unable to recover it. 00:34:27.508 [2024-07-15 19:40:38.350917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.508 [2024-07-15 19:40:38.350981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.508 [2024-07-15 19:40:38.350996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.508 [2024-07-15 19:40:38.351003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.508 [2024-07-15 19:40:38.351009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.508 [2024-07-15 19:40:38.351023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.508 qpair failed and we were unable to recover it. 
00:34:27.769 [2024-07-15 19:40:38.360974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.769 [2024-07-15 19:40:38.361041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.769 [2024-07-15 19:40:38.361056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.769 [2024-07-15 19:40:38.361062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.769 [2024-07-15 19:40:38.361068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.769 [2024-07-15 19:40:38.361082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.769 qpair failed and we were unable to recover it. 00:34:27.769 [2024-07-15 19:40:38.370978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.769 [2024-07-15 19:40:38.371072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.769 [2024-07-15 19:40:38.371087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.769 [2024-07-15 19:40:38.371093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.769 [2024-07-15 19:40:38.371099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.769 [2024-07-15 19:40:38.371113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.769 qpair failed and we were unable to recover it. 00:34:27.769 [2024-07-15 19:40:38.381019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.769 [2024-07-15 19:40:38.381080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.769 [2024-07-15 19:40:38.381095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.769 [2024-07-15 19:40:38.381102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.769 [2024-07-15 19:40:38.381108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.769 [2024-07-15 19:40:38.381121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.769 qpair failed and we were unable to recover it. 
00:34:27.769 [2024-07-15 19:40:38.391049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.769 [2024-07-15 19:40:38.391126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.769 [2024-07-15 19:40:38.391142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.769 [2024-07-15 19:40:38.391148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.769 [2024-07-15 19:40:38.391154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.769 [2024-07-15 19:40:38.391168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.769 qpair failed and we were unable to recover it. 00:34:27.769 [2024-07-15 19:40:38.401058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.401127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.401143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.401153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.401159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.401173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 00:34:27.770 [2024-07-15 19:40:38.411126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.411195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.411211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.411218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.411229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.411243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 
00:34:27.770 [2024-07-15 19:40:38.421131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.421207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.421222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.421233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.421240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.421254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 00:34:27.770 [2024-07-15 19:40:38.431154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.431222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.431241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.431248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.431254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.431269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 00:34:27.770 [2024-07-15 19:40:38.441185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.441342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.441360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.441366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.441374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.441388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 
00:34:27.770 [2024-07-15 19:40:38.451195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.451265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.451281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.451288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.451294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.451308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 00:34:27.770 [2024-07-15 19:40:38.461237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.461303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.461321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.461328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.461335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.461349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 00:34:27.770 [2024-07-15 19:40:38.471261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.471330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.471345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.471353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.471359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.471373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 
00:34:27.770 [2024-07-15 19:40:38.481290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.481358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.481373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.481380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.481387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.481402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 00:34:27.770 [2024-07-15 19:40:38.491304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.491372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.491388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.491398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.491404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.491419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 00:34:27.770 [2024-07-15 19:40:38.501367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.501432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.501448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.501455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.501462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.501477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 
00:34:27.770 [2024-07-15 19:40:38.511389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.511455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.511471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.511478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.511484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.511498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 00:34:27.770 [2024-07-15 19:40:38.521410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.521482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.521497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.521504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.521511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.521525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 00:34:27.770 [2024-07-15 19:40:38.531407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.531470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.531485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.531492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.770 [2024-07-15 19:40:38.531498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.770 [2024-07-15 19:40:38.531512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.770 qpair failed and we were unable to recover it. 
00:34:27.770 [2024-07-15 19:40:38.541388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.770 [2024-07-15 19:40:38.541452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.770 [2024-07-15 19:40:38.541468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.770 [2024-07-15 19:40:38.541474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.771 [2024-07-15 19:40:38.541480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.771 [2024-07-15 19:40:38.541495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.771 qpair failed and we were unable to recover it. 00:34:27.771 [2024-07-15 19:40:38.551513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.771 [2024-07-15 19:40:38.551582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.771 [2024-07-15 19:40:38.551598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.771 [2024-07-15 19:40:38.551605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.771 [2024-07-15 19:40:38.551611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.771 [2024-07-15 19:40:38.551625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.771 qpair failed and we were unable to recover it. 00:34:27.771 [2024-07-15 19:40:38.561502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.771 [2024-07-15 19:40:38.561566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.771 [2024-07-15 19:40:38.561582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.771 [2024-07-15 19:40:38.561589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.771 [2024-07-15 19:40:38.561596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.771 [2024-07-15 19:40:38.561610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.771 qpair failed and we were unable to recover it. 
00:34:27.771 [2024-07-15 19:40:38.571463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.771 [2024-07-15 19:40:38.571535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.771 [2024-07-15 19:40:38.571550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.771 [2024-07-15 19:40:38.571557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.771 [2024-07-15 19:40:38.571563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.771 [2024-07-15 19:40:38.571577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.771 qpair failed and we were unable to recover it. 00:34:27.771 [2024-07-15 19:40:38.581494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.771 [2024-07-15 19:40:38.581556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.771 [2024-07-15 19:40:38.581574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.771 [2024-07-15 19:40:38.581582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.771 [2024-07-15 19:40:38.581588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.771 [2024-07-15 19:40:38.581602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.771 qpair failed and we were unable to recover it. 00:34:27.771 [2024-07-15 19:40:38.591594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.771 [2024-07-15 19:40:38.591664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.771 [2024-07-15 19:40:38.591679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.771 [2024-07-15 19:40:38.591686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.771 [2024-07-15 19:40:38.591693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.771 [2024-07-15 19:40:38.591708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.771 qpair failed and we were unable to recover it. 
00:34:27.771 [2024-07-15 19:40:38.601551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.771 [2024-07-15 19:40:38.601621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.771 [2024-07-15 19:40:38.601638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.771 [2024-07-15 19:40:38.601645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.771 [2024-07-15 19:40:38.601651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.771 [2024-07-15 19:40:38.601666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.771 qpair failed and we were unable to recover it. 00:34:27.771 [2024-07-15 19:40:38.611627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.771 [2024-07-15 19:40:38.611690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.771 [2024-07-15 19:40:38.611706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.771 [2024-07-15 19:40:38.611714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.771 [2024-07-15 19:40:38.611720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:27.771 [2024-07-15 19:40:38.611734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.771 qpair failed and we were unable to recover it. 00:34:28.032 [2024-07-15 19:40:38.621674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.621749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.621765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.621772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.621779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.621794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 
00:34:28.032 [2024-07-15 19:40:38.631688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.631753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.631768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.631775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.631781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.631797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 00:34:28.032 [2024-07-15 19:40:38.641677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.641745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.641760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.641767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.641773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.641787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 00:34:28.032 [2024-07-15 19:40:38.651810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.651895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.651911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.651918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.651925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.651939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 
00:34:28.032 [2024-07-15 19:40:38.661825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.661896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.661912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.661920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.661926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.661940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 00:34:28.032 [2024-07-15 19:40:38.671829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.671899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.671918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.671926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.671932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.671946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 00:34:28.032 [2024-07-15 19:40:38.681847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.681916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.681931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.681938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.681944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.681958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 
00:34:28.032 [2024-07-15 19:40:38.691887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.691962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.691977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.691985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.691991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.692005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 00:34:28.032 [2024-07-15 19:40:38.701913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.701994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.702009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.702017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.702023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.702037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 00:34:28.032 [2024-07-15 19:40:38.711880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.711942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.711957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.711964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.711971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.711988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 
00:34:28.032 [2024-07-15 19:40:38.721980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.722050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.032 [2024-07-15 19:40:38.722065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.032 [2024-07-15 19:40:38.722072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.032 [2024-07-15 19:40:38.722078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.032 [2024-07-15 19:40:38.722093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.032 qpair failed and we were unable to recover it. 00:34:28.032 [2024-07-15 19:40:38.731977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.032 [2024-07-15 19:40:38.732039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.732054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.732062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.732068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.732081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 00:34:28.033 [2024-07-15 19:40:38.742048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.742164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.742181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.742188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.742194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.742209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 
00:34:28.033 [2024-07-15 19:40:38.752056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.752120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.752137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.752144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.752150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.752164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 00:34:28.033 [2024-07-15 19:40:38.762087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.762159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.762177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.762185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.762191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.762205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 00:34:28.033 [2024-07-15 19:40:38.772104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.772173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.772188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.772195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.772201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.772215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 
00:34:28.033 [2024-07-15 19:40:38.782135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.782204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.782219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.782229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.782236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.782251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 00:34:28.033 [2024-07-15 19:40:38.792126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.792194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.792210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.792217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.792223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.792241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 00:34:28.033 [2024-07-15 19:40:38.802202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.802275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.802291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.802299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.802305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.802323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 
00:34:28.033 [2024-07-15 19:40:38.812214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.812301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.812320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.812327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.812334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.812349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 00:34:28.033 [2024-07-15 19:40:38.822255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.822372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.822390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.822398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.822404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.822420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 00:34:28.033 [2024-07-15 19:40:38.832266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.832336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.832351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.832359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.832365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.832379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 
00:34:28.033 [2024-07-15 19:40:38.842275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.842344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.842359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.842366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.842374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.842389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 00:34:28.033 [2024-07-15 19:40:38.852368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.852487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.852508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.852515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.852522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.852537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 00:34:28.033 [2024-07-15 19:40:38.862374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.862458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.862474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.862481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.033 [2024-07-15 19:40:38.862487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.033 [2024-07-15 19:40:38.862502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.033 qpair failed and we were unable to recover it. 
00:34:28.033 [2024-07-15 19:40:38.872481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.033 [2024-07-15 19:40:38.872547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.033 [2024-07-15 19:40:38.872563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.033 [2024-07-15 19:40:38.872570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.034 [2024-07-15 19:40:38.872576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.034 [2024-07-15 19:40:38.872591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.034 qpair failed and we were unable to recover it. 00:34:28.034 [2024-07-15 19:40:38.882351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.034 [2024-07-15 19:40:38.882420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.034 [2024-07-15 19:40:38.882435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.034 [2024-07-15 19:40:38.882443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.034 [2024-07-15 19:40:38.882449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.034 [2024-07-15 19:40:38.882463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.034 qpair failed and we were unable to recover it. 00:34:28.294 [2024-07-15 19:40:38.892444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.294 [2024-07-15 19:40:38.892509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.294 [2024-07-15 19:40:38.892524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.294 [2024-07-15 19:40:38.892532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.294 [2024-07-15 19:40:38.892541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.294 [2024-07-15 19:40:38.892555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.294 qpair failed and we were unable to recover it. 
00:34:28.294 [2024-07-15 19:40:38.902481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.294 [2024-07-15 19:40:38.902544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.294 [2024-07-15 19:40:38.902559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.294 [2024-07-15 19:40:38.902566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.294 [2024-07-15 19:40:38.902572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.294 [2024-07-15 19:40:38.902586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.294 qpair failed and we were unable to recover it. 00:34:28.294 [2024-07-15 19:40:38.912531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.294 [2024-07-15 19:40:38.912596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.294 [2024-07-15 19:40:38.912611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.294 [2024-07-15 19:40:38.912618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.294 [2024-07-15 19:40:38.912625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.294 [2024-07-15 19:40:38.912639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.294 qpair failed and we were unable to recover it. 00:34:28.294 [2024-07-15 19:40:38.922577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.294 [2024-07-15 19:40:38.922648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.294 [2024-07-15 19:40:38.922663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.294 [2024-07-15 19:40:38.922670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.294 [2024-07-15 19:40:38.922676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.294 [2024-07-15 19:40:38.922690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.294 qpair failed and we were unable to recover it. 
00:34:28.294 [2024-07-15 19:40:38.932543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.294 [2024-07-15 19:40:38.932621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.294 [2024-07-15 19:40:38.932636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.294 [2024-07-15 19:40:38.932644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.294 [2024-07-15 19:40:38.932650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.294 [2024-07-15 19:40:38.932664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.294 qpair failed and we were unable to recover it. 00:34:28.294 [2024-07-15 19:40:38.942615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.294 [2024-07-15 19:40:38.942683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.294 [2024-07-15 19:40:38.942699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.294 [2024-07-15 19:40:38.942706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.294 [2024-07-15 19:40:38.942712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.294 [2024-07-15 19:40:38.942726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.294 qpair failed and we were unable to recover it. 00:34:28.294 [2024-07-15 19:40:38.952572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.294 [2024-07-15 19:40:38.952637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.294 [2024-07-15 19:40:38.952653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.294 [2024-07-15 19:40:38.952660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.294 [2024-07-15 19:40:38.952667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.294 [2024-07-15 19:40:38.952682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.294 qpair failed and we were unable to recover it. 
00:34:28.294 [2024-07-15 19:40:38.962680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.294 [2024-07-15 19:40:38.962755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.294 [2024-07-15 19:40:38.962770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.294 [2024-07-15 19:40:38.962777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.294 [2024-07-15 19:40:38.962783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.294 [2024-07-15 19:40:38.962797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.294 qpair failed and we were unable to recover it. 00:34:28.294 [2024-07-15 19:40:38.972705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.294 [2024-07-15 19:40:38.972770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:38.972786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:38.972793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:38.972799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:38.972813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 00:34:28.295 [2024-07-15 19:40:38.982691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:38.982768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:38.982783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:38.982790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:38.982800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:38.982814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 
00:34:28.295 [2024-07-15 19:40:38.992716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:38.992799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:38.992814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:38.992821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:38.992827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:38.992841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 00:34:28.295 [2024-07-15 19:40:39.002781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.002845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.002860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.002867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.002874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.002888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 00:34:28.295 [2024-07-15 19:40:39.012824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.012899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.012914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.012922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.012928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.012942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 
00:34:28.295 [2024-07-15 19:40:39.022884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.022959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.022975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.022982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.022988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.023003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 00:34:28.295 [2024-07-15 19:40:39.032882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.032951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.032967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.032974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.032980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.032994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 00:34:28.295 [2024-07-15 19:40:39.042871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.042938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.042954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.042961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.042967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.042981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 
00:34:28.295 [2024-07-15 19:40:39.052925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.052987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.053004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.053010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.053017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.053032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 00:34:28.295 [2024-07-15 19:40:39.062991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.063059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.063076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.063083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.063090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.063105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 00:34:28.295 [2024-07-15 19:40:39.073006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.073070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.073085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.073092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.073101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.073116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 
00:34:28.295 [2024-07-15 19:40:39.083016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.083084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.083099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.083106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.083112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.083126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 00:34:28.295 [2024-07-15 19:40:39.093065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.093133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.093149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.093156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.093162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.093176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 00:34:28.295 [2024-07-15 19:40:39.103093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.103206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.103227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.103234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.103240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.103256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 
00:34:28.295 [2024-07-15 19:40:39.113036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.295 [2024-07-15 19:40:39.113098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.295 [2024-07-15 19:40:39.113113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.295 [2024-07-15 19:40:39.113120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.295 [2024-07-15 19:40:39.113127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.295 [2024-07-15 19:40:39.113141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.295 qpair failed and we were unable to recover it. 00:34:28.296 [2024-07-15 19:40:39.123180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.296 [2024-07-15 19:40:39.123287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.296 [2024-07-15 19:40:39.123303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.296 [2024-07-15 19:40:39.123310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.296 [2024-07-15 19:40:39.123318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.296 [2024-07-15 19:40:39.123333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.296 qpair failed and we were unable to recover it. 00:34:28.296 [2024-07-15 19:40:39.133088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.296 [2024-07-15 19:40:39.133155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.296 [2024-07-15 19:40:39.133171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.296 [2024-07-15 19:40:39.133179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.296 [2024-07-15 19:40:39.133185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.296 [2024-07-15 19:40:39.133199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.296 qpair failed and we were unable to recover it. 
00:34:28.296 [2024-07-15 19:40:39.143221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.296 [2024-07-15 19:40:39.143286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.296 [2024-07-15 19:40:39.143302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.296 [2024-07-15 19:40:39.143308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.296 [2024-07-15 19:40:39.143314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.296 [2024-07-15 19:40:39.143329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.296 qpair failed and we were unable to recover it. 00:34:28.557 [2024-07-15 19:40:39.153239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.153306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.153323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.153330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.153337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.153352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 00:34:28.557 [2024-07-15 19:40:39.163229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.163297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.163315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.163326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.163332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.163347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 
00:34:28.557 [2024-07-15 19:40:39.173285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.173356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.173372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.173379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.173385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.173399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 00:34:28.557 [2024-07-15 19:40:39.183311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.183378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.183394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.183401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.183407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.183421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 00:34:28.557 [2024-07-15 19:40:39.193330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.193395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.193410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.193417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.193424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.193438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 
00:34:28.557 [2024-07-15 19:40:39.203381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.203456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.203472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.203479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.203486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.203500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 00:34:28.557 [2024-07-15 19:40:39.213380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.213443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.213459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.213466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.213472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.213486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 00:34:28.557 [2024-07-15 19:40:39.223482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.223548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.223565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.223572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.223579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.223593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 
00:34:28.557 [2024-07-15 19:40:39.233405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.233500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.233516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.233522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.233528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.233542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 00:34:28.557 [2024-07-15 19:40:39.243438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.243506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.243521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.243529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.243535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.243549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 00:34:28.557 [2024-07-15 19:40:39.253454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.253560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.253576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.253587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.253594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.253609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 
00:34:28.557 [2024-07-15 19:40:39.263559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.263628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.263643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.263651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.263658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.263672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 00:34:28.557 [2024-07-15 19:40:39.273544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.273609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.273625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.273632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.273638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.273652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 00:34:28.557 [2024-07-15 19:40:39.283631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.283699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.557 [2024-07-15 19:40:39.283714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.557 [2024-07-15 19:40:39.283722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.557 [2024-07-15 19:40:39.283728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.557 [2024-07-15 19:40:39.283742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.557 qpair failed and we were unable to recover it. 
00:34:28.557 [2024-07-15 19:40:39.293631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.557 [2024-07-15 19:40:39.293695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.293710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.293717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.293725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.293739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 00:34:28.558 [2024-07-15 19:40:39.303600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.303678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.303694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.303701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.303707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.303721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 00:34:28.558 [2024-07-15 19:40:39.313699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.313762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.313776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.313784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.313790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.313804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 
00:34:28.558 [2024-07-15 19:40:39.323725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.323795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.323810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.323817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.323824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.323838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 00:34:28.558 [2024-07-15 19:40:39.333766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.333837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.333852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.333859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.333866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.333879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 00:34:28.558 [2024-07-15 19:40:39.343712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.343778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.343793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.343803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.343809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.343824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 
00:34:28.558 [2024-07-15 19:40:39.353841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.353911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.353929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.353936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.353943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.353958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 00:34:28.558 [2024-07-15 19:40:39.363884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.363960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.363975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.363982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.363988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.364002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 00:34:28.558 [2024-07-15 19:40:39.373885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.373955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.373972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.373979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.373985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.374000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 
00:34:28.558 [2024-07-15 19:40:39.383912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.383978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.383993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.384000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.384006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.384020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 00:34:28.558 [2024-07-15 19:40:39.393929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.393992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.394008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.394015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.394021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.394035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 00:34:28.558 [2024-07-15 19:40:39.403992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.558 [2024-07-15 19:40:39.404055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.558 [2024-07-15 19:40:39.404071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.558 [2024-07-15 19:40:39.404078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.558 [2024-07-15 19:40:39.404084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.558 [2024-07-15 19:40:39.404098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.558 qpair failed and we were unable to recover it. 
00:34:28.819 [2024-07-15 19:40:39.414011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.414083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.414098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.414107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.414113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.414128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 00:34:28.819 [2024-07-15 19:40:39.424041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.424113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.424129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.424136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.424142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.424156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 00:34:28.819 [2024-07-15 19:40:39.434071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.434141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.434238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.434246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.434253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.434268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 
00:34:28.819 [2024-07-15 19:40:39.444072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.444138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.444155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.444162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.444169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.444184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 00:34:28.819 [2024-07-15 19:40:39.454129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.454198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.454214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.454221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.454231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.454246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 00:34:28.819 [2024-07-15 19:40:39.464152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.464218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.464238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.464245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.464251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.464266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 
00:34:28.819 [2024-07-15 19:40:39.474184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.474254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.474272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.474279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.474285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.474303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 00:34:28.819 [2024-07-15 19:40:39.484251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.484354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.484369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.484376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.484382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.484396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 00:34:28.819 [2024-07-15 19:40:39.494222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.494303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.494318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.494325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.494331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.494345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 
00:34:28.819 [2024-07-15 19:40:39.504302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.504414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.504431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.504438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.504445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.504460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 00:34:28.819 [2024-07-15 19:40:39.514302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.514387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.514402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.514409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.514416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.514430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.819 qpair failed and we were unable to recover it. 00:34:28.819 [2024-07-15 19:40:39.524330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.819 [2024-07-15 19:40:39.524401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.819 [2024-07-15 19:40:39.524425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.819 [2024-07-15 19:40:39.524433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.819 [2024-07-15 19:40:39.524439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.819 [2024-07-15 19:40:39.524455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 
00:34:28.820 [2024-07-15 19:40:39.534345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.534409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.534428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.534435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.534442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.534458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 00:34:28.820 [2024-07-15 19:40:39.544380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.544491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.544510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.544518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.544526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.544542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 00:34:28.820 [2024-07-15 19:40:39.554469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.554534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.554553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.554560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.554567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.554582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 
00:34:28.820 [2024-07-15 19:40:39.564379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.564446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.564463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.564469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.564476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.564494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 00:34:28.820 [2024-07-15 19:40:39.574471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.574544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.574560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.574568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.574575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.574589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 00:34:28.820 [2024-07-15 19:40:39.584433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.584503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.584521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.584528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.584535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.584549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 
00:34:28.820 [2024-07-15 19:40:39.594447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.594514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.594531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.594539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.594545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.594559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 00:34:28.820 [2024-07-15 19:40:39.604565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.604630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.604646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.604654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.604660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.604675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 00:34:28.820 [2024-07-15 19:40:39.614584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.614656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.614675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.614682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.614689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.614703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 
00:34:28.820 [2024-07-15 19:40:39.624601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.624665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.624680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.624687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.624693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.624708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 00:34:28.820 [2024-07-15 19:40:39.634581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.634647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.634664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.634671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.634677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.634691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 00:34:28.820 [2024-07-15 19:40:39.644714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.644780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.644796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.644803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.644809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.644823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 
00:34:28.820 [2024-07-15 19:40:39.654713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.654794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.654811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.654819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.654825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.654844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 00:34:28.820 [2024-07-15 19:40:39.664661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.820 [2024-07-15 19:40:39.664737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.820 [2024-07-15 19:40:39.664753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.820 [2024-07-15 19:40:39.664760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.820 [2024-07-15 19:40:39.664766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:28.820 [2024-07-15 19:40:39.664780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.820 qpair failed and we were unable to recover it. 00:34:29.081 [2024-07-15 19:40:39.674725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.674846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.674864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.674871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.674878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.674894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 
00:34:29.081 [2024-07-15 19:40:39.684789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.684864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.684880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.684888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.684894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.684909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 00:34:29.081 [2024-07-15 19:40:39.694718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.694787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.694802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.694810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.694817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.694831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 00:34:29.081 [2024-07-15 19:40:39.704852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.704933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.704952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.704960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.704966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.704980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 
00:34:29.081 [2024-07-15 19:40:39.714907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.714969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.714985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.714992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.714999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.715013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 00:34:29.081 [2024-07-15 19:40:39.724932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.724997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.725012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.725020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.725026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.725040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 00:34:29.081 [2024-07-15 19:40:39.734914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.734988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.735005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.735013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.735019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.735033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 
00:34:29.081 [2024-07-15 19:40:39.744863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.744932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.744949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.744957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.744967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.744982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 00:34:29.081 [2024-07-15 19:40:39.754978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.755041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.755057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.755065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.755071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.755086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 00:34:29.081 [2024-07-15 19:40:39.764937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.765003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.765018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.765025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.765031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.765047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 
00:34:29.081 [2024-07-15 19:40:39.775065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.775137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.775153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.775160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.775167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.775182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 00:34:29.081 [2024-07-15 19:40:39.784999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.785067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.785084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.785091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.785098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.785112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.081 qpair failed and we were unable to recover it. 00:34:29.081 [2024-07-15 19:40:39.795097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.081 [2024-07-15 19:40:39.795171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.081 [2024-07-15 19:40:39.795188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.081 [2024-07-15 19:40:39.795195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.081 [2024-07-15 19:40:39.795202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.081 [2024-07-15 19:40:39.795216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 
00:34:29.082 [2024-07-15 19:40:39.805132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.805202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.805221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.805234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.805241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.805256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 00:34:29.082 [2024-07-15 19:40:39.815115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.815184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.815199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.815206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.815213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.815231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 00:34:29.082 [2024-07-15 19:40:39.825109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.825185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.825203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.825211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.825218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.825237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 
00:34:29.082 [2024-07-15 19:40:39.835123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.835190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.835207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.835214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.835232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.835247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 00:34:29.082 [2024-07-15 19:40:39.845254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.845320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.845336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.845343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.845349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.845363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 00:34:29.082 [2024-07-15 19:40:39.855358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.855474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.855491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.855498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.855504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.855520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 
00:34:29.082 [2024-07-15 19:40:39.865287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.865353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.865369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.865377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.865383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.865397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 00:34:29.082 [2024-07-15 19:40:39.875267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.875334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.875351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.875359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.875365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.875380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 00:34:29.082 [2024-07-15 19:40:39.885317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.885436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.885454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.885461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.885467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.885483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 
00:34:29.082 [2024-07-15 19:40:39.895381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.895454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.895471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.895477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.895484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.895498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 00:34:29.082 [2024-07-15 19:40:39.905416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.905480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.905496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.905503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.905509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.905523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 00:34:29.082 [2024-07-15 19:40:39.915367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.915435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.915451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.915459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.915465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.915479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 
00:34:29.082 [2024-07-15 19:40:39.925403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.082 [2024-07-15 19:40:39.925472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.082 [2024-07-15 19:40:39.925487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.082 [2024-07-15 19:40:39.925499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.082 [2024-07-15 19:40:39.925505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.082 [2024-07-15 19:40:39.925519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.082 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-15 19:40:39.935519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:39.935599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:39.935615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:39.935622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:39.935628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:39.935642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-15 19:40:39.945439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:39.945505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:39.945520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:39.945527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:39.945533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:39.945546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 
00:34:29.343 [2024-07-15 19:40:39.955543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:39.955609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:39.955626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:39.955633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:39.955639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:39.955654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-15 19:40:39.965543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:39.965613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:39.965628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:39.965635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:39.965641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:39.965655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-15 19:40:39.975615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:39.975684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:39.975700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:39.975707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:39.975713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:39.975727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 
00:34:29.343 [2024-07-15 19:40:39.985681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:39.985764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:39.985779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:39.985786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:39.985792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:39.985806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-15 19:40:39.995625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:39.995691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:39.995706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:39.995713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:39.995719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:39.995733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-15 19:40:40.005706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:40.005772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:40.005789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:40.005796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:40.005803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:40.005818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 
00:34:29.343 [2024-07-15 19:40:40.015779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:40.015855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:40.015874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:40.015885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:40.015891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:40.015907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-15 19:40:40.025696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:40.025768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:40.025784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:40.025791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:40.025798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:40.025812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-15 19:40:40.035726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:40.035793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:40.035809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:40.035816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:40.035822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:40.035836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 
00:34:29.343 [2024-07-15 19:40:40.045850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:40.045921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:40.045937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:40.045945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:40.045951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:40.045966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-15 19:40:40.055863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:40.055931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:40.055948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:40.055955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:40.055961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:40.055976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-15 19:40:40.065963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:40.066057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:40.066078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:40.066086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:40.066093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.343 [2024-07-15 19:40:40.066110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.343 qpair failed and we were unable to recover it. 
00:34:29.343 [2024-07-15 19:40:40.075845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.343 [2024-07-15 19:40:40.075911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.343 [2024-07-15 19:40:40.075928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.343 [2024-07-15 19:40:40.075935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.343 [2024-07-15 19:40:40.075941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.075956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-15 19:40:40.085981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.086053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.086070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.086077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.086084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.086099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-15 19:40:40.095962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.096033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.096050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.096057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.096063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.096079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 
00:34:29.344 [2024-07-15 19:40:40.105976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.106043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.106059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.106070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.106076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.106090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-15 19:40:40.116033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.116096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.116112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.116119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.116125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.116139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-15 19:40:40.126093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.126207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.126229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.126237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.126244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.126258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 
00:34:29.344 [2024-07-15 19:40:40.136069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.136132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.136148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.136155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.136162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.136176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-15 19:40:40.146087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.146154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.146170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.146177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.146184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.146197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-15 19:40:40.156133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.156199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.156215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.156223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.156233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.156249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 
00:34:29.344 [2024-07-15 19:40:40.166180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.166253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.166269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.166276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.166283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.166297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-15 19:40:40.176188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.176258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.176274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.176282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.176298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.176312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-15 19:40:40.186260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.344 [2024-07-15 19:40:40.186324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.344 [2024-07-15 19:40:40.186340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.344 [2024-07-15 19:40:40.186347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.344 [2024-07-15 19:40:40.186353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.344 [2024-07-15 19:40:40.186368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.344 qpair failed and we were unable to recover it. 
00:34:29.605 [2024-07-15 19:40:40.196252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.605 [2024-07-15 19:40:40.196331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.605 [2024-07-15 19:40:40.196351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.605 [2024-07-15 19:40:40.196358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.605 [2024-07-15 19:40:40.196364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.605 [2024-07-15 19:40:40.196378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.605 qpair failed and we were unable to recover it. 00:34:29.605 [2024-07-15 19:40:40.206317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.605 [2024-07-15 19:40:40.206384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.605 [2024-07-15 19:40:40.206400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.605 [2024-07-15 19:40:40.206408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.605 [2024-07-15 19:40:40.206414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.605 [2024-07-15 19:40:40.206428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.605 qpair failed and we were unable to recover it. 00:34:29.605 [2024-07-15 19:40:40.216303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.605 [2024-07-15 19:40:40.216368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.605 [2024-07-15 19:40:40.216384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.605 [2024-07-15 19:40:40.216391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.605 [2024-07-15 19:40:40.216397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.605 [2024-07-15 19:40:40.216411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.605 qpair failed and we were unable to recover it. 
00:34:29.605 [2024-07-15 19:40:40.226265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.605 [2024-07-15 19:40:40.226331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.605 [2024-07-15 19:40:40.226347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.605 [2024-07-15 19:40:40.226354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.605 [2024-07-15 19:40:40.226361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.605 [2024-07-15 19:40:40.226374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.605 qpair failed and we were unable to recover it. 00:34:29.605 [2024-07-15 19:40:40.236372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.605 [2024-07-15 19:40:40.236440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.605 [2024-07-15 19:40:40.236456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.605 [2024-07-15 19:40:40.236464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.605 [2024-07-15 19:40:40.236470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.605 [2024-07-15 19:40:40.236488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.605 qpair failed and we were unable to recover it. 00:34:29.605 [2024-07-15 19:40:40.246408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.605 [2024-07-15 19:40:40.246473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.605 [2024-07-15 19:40:40.246489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.605 [2024-07-15 19:40:40.246496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.605 [2024-07-15 19:40:40.246502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.605 [2024-07-15 19:40:40.246516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.605 qpair failed and we were unable to recover it. 
00:34:29.605 [2024-07-15 19:40:40.256357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.605 [2024-07-15 19:40:40.256425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.605 [2024-07-15 19:40:40.256442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.605 [2024-07-15 19:40:40.256450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.605 [2024-07-15 19:40:40.256456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.605 [2024-07-15 19:40:40.256470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.605 qpair failed and we were unable to recover it. 00:34:29.605 [2024-07-15 19:40:40.266442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.605 [2024-07-15 19:40:40.266506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.605 [2024-07-15 19:40:40.266523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.605 [2024-07-15 19:40:40.266530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.605 [2024-07-15 19:40:40.266536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.605 [2024-07-15 19:40:40.266550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.605 qpair failed and we were unable to recover it. 00:34:29.605 [2024-07-15 19:40:40.276483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.605 [2024-07-15 19:40:40.276587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.605 [2024-07-15 19:40:40.276603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.605 [2024-07-15 19:40:40.276610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.605 [2024-07-15 19:40:40.276617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.276632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 
00:34:29.606 [2024-07-15 19:40:40.286548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.286616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.286637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.286644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.286650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.286665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 00:34:29.606 [2024-07-15 19:40:40.296539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.296612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.296627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.296634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.296641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.296655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 00:34:29.606 [2024-07-15 19:40:40.306575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.306641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.306658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.306665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.306672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.306686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 
00:34:29.606 [2024-07-15 19:40:40.316603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.316714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.316731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.316739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.316748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.316766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 00:34:29.606 [2024-07-15 19:40:40.326640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.326708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.326724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.326731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.326738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.326756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 00:34:29.606 [2024-07-15 19:40:40.336649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.336718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.336734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.336742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.336748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.336762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 
00:34:29.606 [2024-07-15 19:40:40.346697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.346802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.346818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.346826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.346832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.346848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 00:34:29.606 [2024-07-15 19:40:40.356713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.356779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.356796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.356804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.356810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.356825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 00:34:29.606 [2024-07-15 19:40:40.366707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.366775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.366791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.366798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.366804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.366819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 
00:34:29.606 [2024-07-15 19:40:40.376785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.376856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.376875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.376882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.376889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.376902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 00:34:29.606 [2024-07-15 19:40:40.386770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.386836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.386851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.386858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.386864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.386879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 00:34:29.606 [2024-07-15 19:40:40.396846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.396922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.396938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.396945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.396952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.396966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 
00:34:29.606 [2024-07-15 19:40:40.406881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.406947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.406963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.406971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.406977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.406991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 00:34:29.606 [2024-07-15 19:40:40.416897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.416971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.416988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.606 [2024-07-15 19:40:40.416995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.606 [2024-07-15 19:40:40.417002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.606 [2024-07-15 19:40:40.417020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.606 qpair failed and we were unable to recover it. 00:34:29.606 [2024-07-15 19:40:40.426944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.606 [2024-07-15 19:40:40.427012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.606 [2024-07-15 19:40:40.427028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.607 [2024-07-15 19:40:40.427035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.607 [2024-07-15 19:40:40.427041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.607 [2024-07-15 19:40:40.427056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.607 qpair failed and we were unable to recover it. 
00:34:29.607 [2024-07-15 19:40:40.436956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.607 [2024-07-15 19:40:40.437062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.607 [2024-07-15 19:40:40.437079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.607 [2024-07-15 19:40:40.437087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.607 [2024-07-15 19:40:40.437093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.607 [2024-07-15 19:40:40.437108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.607 qpair failed and we were unable to recover it. 00:34:29.607 [2024-07-15 19:40:40.446988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.607 [2024-07-15 19:40:40.447069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.607 [2024-07-15 19:40:40.447085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.607 [2024-07-15 19:40:40.447092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.607 [2024-07-15 19:40:40.447098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.607 [2024-07-15 19:40:40.447113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.607 qpair failed and we were unable to recover it. 00:34:29.607 [2024-07-15 19:40:40.457029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.607 [2024-07-15 19:40:40.457093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.607 [2024-07-15 19:40:40.457109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.607 [2024-07-15 19:40:40.457116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.607 [2024-07-15 19:40:40.457123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.607 [2024-07-15 19:40:40.457137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.607 qpair failed and we were unable to recover it. 
00:34:29.867 [2024-07-15 19:40:40.467065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.867 [2024-07-15 19:40:40.467128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.867 [2024-07-15 19:40:40.467147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.867 [2024-07-15 19:40:40.467154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.867 [2024-07-15 19:40:40.467161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.867 [2024-07-15 19:40:40.467175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.867 qpair failed and we were unable to recover it. 00:34:29.867 [2024-07-15 19:40:40.477062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.867 [2024-07-15 19:40:40.477131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.867 [2024-07-15 19:40:40.477148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.867 [2024-07-15 19:40:40.477155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.867 [2024-07-15 19:40:40.477162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.867 [2024-07-15 19:40:40.477177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.867 qpair failed and we were unable to recover it. 00:34:29.867 [2024-07-15 19:40:40.487159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.867 [2024-07-15 19:40:40.487230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.867 [2024-07-15 19:40:40.487246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.867 [2024-07-15 19:40:40.487253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.867 [2024-07-15 19:40:40.487259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.867 [2024-07-15 19:40:40.487273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.867 qpair failed and we were unable to recover it. 
00:34:29.867 [2024-07-15 19:40:40.497179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.867 [2024-07-15 19:40:40.497261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.867 [2024-07-15 19:40:40.497276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.867 [2024-07-15 19:40:40.497283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.867 [2024-07-15 19:40:40.497289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.867 [2024-07-15 19:40:40.497303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.867 qpair failed and we were unable to recover it. 00:34:29.867 [2024-07-15 19:40:40.507107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.867 [2024-07-15 19:40:40.507174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.867 [2024-07-15 19:40:40.507189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.867 [2024-07-15 19:40:40.507196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.867 [2024-07-15 19:40:40.507206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.867 [2024-07-15 19:40:40.507220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.867 qpair failed and we were unable to recover it. 00:34:29.867 [2024-07-15 19:40:40.517202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.867 [2024-07-15 19:40:40.517277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.867 [2024-07-15 19:40:40.517293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.867 [2024-07-15 19:40:40.517300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.867 [2024-07-15 19:40:40.517306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.867 [2024-07-15 19:40:40.517320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.867 qpair failed and we were unable to recover it. 
00:34:29.867 [2024-07-15 19:40:40.527255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.527323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.527339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.527346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.527352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.527367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 00:34:29.868 [2024-07-15 19:40:40.537271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.537340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.537355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.537363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.537369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.537383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 00:34:29.868 [2024-07-15 19:40:40.547287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.547360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.547375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.547383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.547389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.547403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 
00:34:29.868 [2024-07-15 19:40:40.557327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.557396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.557412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.557420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.557426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.557440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 00:34:29.868 [2024-07-15 19:40:40.567373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.567489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.567506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.567513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.567520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.567534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 00:34:29.868 [2024-07-15 19:40:40.577380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.577451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.577467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.577474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.577480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.577496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 
00:34:29.868 [2024-07-15 19:40:40.587415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.587481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.587496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.587503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.587509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.587524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 00:34:29.868 [2024-07-15 19:40:40.597455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.597525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.597540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.597547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.597561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.597575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 00:34:29.868 [2024-07-15 19:40:40.607484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.607553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.607569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.607577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.607583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.607597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 
00:34:29.868 [2024-07-15 19:40:40.617491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.617556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.617572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.617579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.617585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.617599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 00:34:29.868 [2024-07-15 19:40:40.627521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.627587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.627603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.627610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.627616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.627630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 00:34:29.868 [2024-07-15 19:40:40.637474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.637541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.637557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.868 [2024-07-15 19:40:40.637564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.868 [2024-07-15 19:40:40.637571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.868 [2024-07-15 19:40:40.637585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.868 qpair failed and we were unable to recover it. 
00:34:29.868 [2024-07-15 19:40:40.647584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.868 [2024-07-15 19:40:40.647658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.868 [2024-07-15 19:40:40.647673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.869 [2024-07-15 19:40:40.647680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.869 [2024-07-15 19:40:40.647686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.869 [2024-07-15 19:40:40.647700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.869 qpair failed and we were unable to recover it. 00:34:29.869 [2024-07-15 19:40:40.657527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.869 [2024-07-15 19:40:40.657600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.869 [2024-07-15 19:40:40.657616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.869 [2024-07-15 19:40:40.657623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.869 [2024-07-15 19:40:40.657630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.869 [2024-07-15 19:40:40.657644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.869 qpair failed and we were unable to recover it. 00:34:29.869 [2024-07-15 19:40:40.667609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.869 [2024-07-15 19:40:40.667678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.869 [2024-07-15 19:40:40.667696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.869 [2024-07-15 19:40:40.667705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.869 [2024-07-15 19:40:40.667712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.869 [2024-07-15 19:40:40.667728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.869 qpair failed and we were unable to recover it. 
00:34:29.869 [2024-07-15 19:40:40.677589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.869 [2024-07-15 19:40:40.677654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.869 [2024-07-15 19:40:40.677669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.869 [2024-07-15 19:40:40.677676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.869 [2024-07-15 19:40:40.677682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.869 [2024-07-15 19:40:40.677697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.869 qpair failed and we were unable to recover it. 00:34:29.869 [2024-07-15 19:40:40.687709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.869 [2024-07-15 19:40:40.687776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.869 [2024-07-15 19:40:40.687792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.869 [2024-07-15 19:40:40.687803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.869 [2024-07-15 19:40:40.687809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.869 [2024-07-15 19:40:40.687824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.869 qpair failed and we were unable to recover it. 00:34:29.869 [2024-07-15 19:40:40.697723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.869 [2024-07-15 19:40:40.697795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.869 [2024-07-15 19:40:40.697811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.869 [2024-07-15 19:40:40.697818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.869 [2024-07-15 19:40:40.697824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.869 [2024-07-15 19:40:40.697839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.869 qpair failed and we were unable to recover it. 
00:34:29.869 [2024-07-15 19:40:40.707726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.869 [2024-07-15 19:40:40.707794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.869 [2024-07-15 19:40:40.707809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.869 [2024-07-15 19:40:40.707816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.869 [2024-07-15 19:40:40.707822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.869 [2024-07-15 19:40:40.707836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.869 qpair failed and we were unable to recover it. 00:34:29.869 [2024-07-15 19:40:40.717843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.869 [2024-07-15 19:40:40.717910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.869 [2024-07-15 19:40:40.717926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.869 [2024-07-15 19:40:40.717934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.869 [2024-07-15 19:40:40.717940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:29.869 [2024-07-15 19:40:40.717954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.869 qpair failed and we were unable to recover it. 00:34:30.130 [2024-07-15 19:40:40.727809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.130 [2024-07-15 19:40:40.727880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.130 [2024-07-15 19:40:40.727896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.130 [2024-07-15 19:40:40.727903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.130 [2024-07-15 19:40:40.727909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.130 [2024-07-15 19:40:40.727924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.130 qpair failed and we were unable to recover it. 
00:34:30.130 [2024-07-15 19:40:40.737755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.130 [2024-07-15 19:40:40.737825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.130 [2024-07-15 19:40:40.737842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.130 [2024-07-15 19:40:40.737849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.130 [2024-07-15 19:40:40.737855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.130 [2024-07-15 19:40:40.737870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.130 qpair failed and we were unable to recover it. 00:34:30.130 [2024-07-15 19:40:40.747850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.130 [2024-07-15 19:40:40.747920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.130 [2024-07-15 19:40:40.747936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.130 [2024-07-15 19:40:40.747942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.130 [2024-07-15 19:40:40.747948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.130 [2024-07-15 19:40:40.747962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.130 qpair failed and we were unable to recover it. 00:34:30.130 [2024-07-15 19:40:40.757880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.130 [2024-07-15 19:40:40.757947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.130 [2024-07-15 19:40:40.757964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.130 [2024-07-15 19:40:40.757971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.130 [2024-07-15 19:40:40.757977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.130 [2024-07-15 19:40:40.757991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.130 qpair failed and we were unable to recover it. 
00:34:30.130 [2024-07-15 19:40:40.767915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.130 [2024-07-15 19:40:40.767987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.130 [2024-07-15 19:40:40.768003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.130 [2024-07-15 19:40:40.768010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.130 [2024-07-15 19:40:40.768016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.130 [2024-07-15 19:40:40.768030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.130 qpair failed and we were unable to recover it. 00:34:30.130 [2024-07-15 19:40:40.777949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.130 [2024-07-15 19:40:40.778018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.130 [2024-07-15 19:40:40.778034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.130 [2024-07-15 19:40:40.778044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.130 [2024-07-15 19:40:40.778051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.130 [2024-07-15 19:40:40.778065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.130 qpair failed and we were unable to recover it. 00:34:30.130 [2024-07-15 19:40:40.787964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.130 [2024-07-15 19:40:40.788027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.130 [2024-07-15 19:40:40.788043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.130 [2024-07-15 19:40:40.788051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.130 [2024-07-15 19:40:40.788057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.130 [2024-07-15 19:40:40.788071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.130 qpair failed and we were unable to recover it. 
00:34:30.130 [2024-07-15 19:40:40.797979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.130 [2024-07-15 19:40:40.798047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.130 [2024-07-15 19:40:40.798063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.130 [2024-07-15 19:40:40.798071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.130 [2024-07-15 19:40:40.798077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.798091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 00:34:30.131 [2024-07-15 19:40:40.808040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.808110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.808128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.808136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.808142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.808158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 00:34:30.131 [2024-07-15 19:40:40.818056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.818172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.818190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.818197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.818204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.818219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 
00:34:30.131 [2024-07-15 19:40:40.828093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.828172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.828188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.828195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.828202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.828215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 00:34:30.131 [2024-07-15 19:40:40.838111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.838228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.838245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.838252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.838258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.838274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 00:34:30.131 [2024-07-15 19:40:40.848146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.848213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.848233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.848241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.848248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.848263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 
00:34:30.131 [2024-07-15 19:40:40.858171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.858244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.858261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.858269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.858276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.858291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 00:34:30.131 [2024-07-15 19:40:40.868190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.868257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.868273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.868283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.868290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.868304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 00:34:30.131 [2024-07-15 19:40:40.878240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.878312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.878328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.878335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.878341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.878356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 
00:34:30.131 [2024-07-15 19:40:40.888277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.888344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.888358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.888365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.888372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.888386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 00:34:30.131 [2024-07-15 19:40:40.898289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.898359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.898374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.898381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.898387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.898402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 00:34:30.131 [2024-07-15 19:40:40.908358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.908425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.908441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.908448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.908454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.908469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 
00:34:30.131 [2024-07-15 19:40:40.918281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.918351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.918367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.918374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.918380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.918395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 00:34:30.131 [2024-07-15 19:40:40.928395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.928462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.928479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.928486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.928492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.928506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 00:34:30.131 [2024-07-15 19:40:40.938404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.938471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.131 [2024-07-15 19:40:40.938487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.131 [2024-07-15 19:40:40.938494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.131 [2024-07-15 19:40:40.938500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.131 [2024-07-15 19:40:40.938515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.131 qpair failed and we were unable to recover it. 
00:34:30.131 [2024-07-15 19:40:40.948450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.131 [2024-07-15 19:40:40.948546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.132 [2024-07-15 19:40:40.948563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.132 [2024-07-15 19:40:40.948570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.132 [2024-07-15 19:40:40.948577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.132 [2024-07-15 19:40:40.948592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.132 qpair failed and we were unable to recover it. 00:34:30.132 [2024-07-15 19:40:40.958485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.132 [2024-07-15 19:40:40.958568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.132 [2024-07-15 19:40:40.958587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.132 [2024-07-15 19:40:40.958595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.132 [2024-07-15 19:40:40.958601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.132 [2024-07-15 19:40:40.958616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.132 qpair failed and we were unable to recover it. 00:34:30.132 [2024-07-15 19:40:40.968505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.132 [2024-07-15 19:40:40.968575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.132 [2024-07-15 19:40:40.968591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.132 [2024-07-15 19:40:40.968598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.132 [2024-07-15 19:40:40.968605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.132 [2024-07-15 19:40:40.968619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.132 qpair failed and we were unable to recover it. 
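The same CONNECT failure signature repeats for each attempted I/O queue pair until the keep-alive path reports the controller as failed and a reset is issued further below. When triaging a run like this, the repeated *ERROR* entries can be tallied per source site with a short script; the log file name is an assumption and nothing below is produced by the test itself:

# Count *ERROR* occurrences per "file.c:function" site in a saved copy of this log.
from collections import Counter
import re
import sys

counts = Counter()
path = sys.argv[1] if len(sys.argv) > 1 else "target_disconnect.log"  # assumed file name
with open(path) as f:
    for line in f:
        m = re.search(r"(\w+\.c):\s*\d+:(\w+): \*ERROR\*", line)
        if m:
            counts["%s:%s" % (m.group(1), m.group(2))] += 1

for site, n in counts.most_common():
    print("%6d  %s" % (n, site))
# In a run like this one, nvme_fabric.c:_nvme_fabric_qpair_connect_poll and
# ctrlr.c:_nvmf_ctrlr_add_io_qpair dominate the counts.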
00:34:30.132 [2024-07-15 19:40:40.978512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.132 [2024-07-15 19:40:40.978579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.132 [2024-07-15 19:40:40.978595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.132 [2024-07-15 19:40:40.978602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.132 [2024-07-15 19:40:40.978609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.132 [2024-07-15 19:40:40.978624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.132 qpair failed and we were unable to recover it. 00:34:30.392 [2024-07-15 19:40:40.988569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.392 [2024-07-15 19:40:40.988641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.392 [2024-07-15 19:40:40.988657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.392 [2024-07-15 19:40:40.988665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.392 [2024-07-15 19:40:40.988671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.392 [2024-07-15 19:40:40.988686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.392 qpair failed and we were unable to recover it. 00:34:30.392 [2024-07-15 19:40:40.998592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.392 [2024-07-15 19:40:40.998711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.392 [2024-07-15 19:40:40.998728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.392 [2024-07-15 19:40:40.998736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.392 [2024-07-15 19:40:40.998742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c3ef90 00:34:30.392 [2024-07-15 19:40:40.998757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.392 qpair failed and we were unable to recover it. 00:34:30.392 [2024-07-15 19:40:40.998848] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:30.392 A controller has encountered a failure and is being reset. 00:34:30.392 [2024-07-15 19:40:40.998947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4cf30 (9): Bad file descriptor 00:34:30.392 qpair failed and we were unable to recover it. 00:34:30.392 qpair failed and we were unable to recover it. 
00:34:30.392 qpair failed and we were unable to recover it. 00:34:30.392 qpair failed and we were unable to recover it. 00:34:30.392 Controller properly reset. 00:34:30.392 Initializing NVMe Controllers 00:34:30.392 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:30.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:30.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:30.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:30.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:30.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:30.392 Initialization complete. Launching workers. 00:34:30.392 Starting thread on core 1 00:34:30.392 Starting thread on core 2 00:34:30.392 Starting thread on core 3 00:34:30.392 Starting thread on core 0 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:30.392 00:34:30.392 real 0m10.683s 00:34:30.392 user 0m19.100s 00:34:30.392 sys 0m4.376s 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.392 ************************************ 00:34:30.392 END TEST nvmf_target_disconnect_tc2 00:34:30.392 ************************************ 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:30.392 rmmod nvme_tcp 00:34:30.392 rmmod nvme_fabrics 00:34:30.392 rmmod nvme_keyring 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1843910 ']' 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1843910 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1843910 ']' 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1843910 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:30.392 19:40:41 
nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:30.392 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1843910 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1843910' 00:34:30.652 killing process with pid 1843910 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1843910 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1843910 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:30.652 19:40:41 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.210 19:40:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:33.210 00:34:33.210 real 0m18.792s 00:34:33.210 user 0m46.392s 00:34:33.210 sys 0m8.812s 00:34:33.210 19:40:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:33.210 19:40:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:33.210 ************************************ 00:34:33.210 END TEST nvmf_target_disconnect 00:34:33.210 ************************************ 00:34:33.210 19:40:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:33.210 19:40:43 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:33.210 19:40:43 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:33.210 19:40:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.210 19:40:43 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:33.210 00:34:33.210 real 28m9.252s 00:34:33.210 user 72m31.166s 00:34:33.210 sys 7m27.088s 00:34:33.210 19:40:43 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:33.210 19:40:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.210 ************************************ 00:34:33.210 END TEST nvmf_tcp 00:34:33.210 ************************************ 00:34:33.210 19:40:43 -- common/autotest_common.sh@1142 -- # return 0 00:34:33.210 19:40:43 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:33.210 19:40:43 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:33.210 19:40:43 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:33.210 19:40:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:33.210 19:40:43 -- common/autotest_common.sh@10 -- 
# set +x 00:34:33.210 ************************************ 00:34:33.210 START TEST spdkcli_nvmf_tcp 00:34:33.210 ************************************ 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:33.210 * Looking for test storage... 00:34:33.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1845455 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1845455 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1845455 ']' 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:33.210 19:40:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.210 [2024-07-15 19:40:43.842730] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:34:33.210 [2024-07-15 19:40:43.842781] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845455 ] 00:34:33.210 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.210 [2024-07-15 19:40:43.868334] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:33.210 [2024-07-15 19:40:43.896894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:33.210 [2024-07-15 19:40:43.938943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.210 [2024-07-15 19:40:43.938946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.210 19:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:33.210 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:33.210 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:33.210 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:33.210 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:33.210 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:33.210 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:33.210 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:33.210 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:33.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:33.210 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:33.210 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:33.211 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:33.211 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:33.211 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:33.211 ' 00:34:35.841 [2024-07-15 19:40:46.443413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.778 [2024-07-15 19:40:47.619438] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:39.314 [2024-07-15 19:40:49.782013] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:41.214 [2024-07-15 19:40:51.643876] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:42.594 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:42.594 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:42.594 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:42.594 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:42.594 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 
00:34:42.594 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:42.594 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:42.594 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:42.594 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:42.594 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:42.594 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:42.594 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:42.594 19:40:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:42.594 19:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:42.594 19:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.594 19:40:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:34:42.594 19:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:42.594 19:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.594 19:40:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:42.594 19:40:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:42.854 19:40:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:42.854 19:40:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:42.854 19:40:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:42.854 19:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:42.854 19:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.854 19:40:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:42.854 19:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:42.854 19:40:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.854 19:40:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:42.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:42.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:42.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:42.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:42.854 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:42.854 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:42.854 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:42.854 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:42.854 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:42.854 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:42.854 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:42.854 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:42.854 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:42.854 ' 00:34:48.122 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:48.122 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:48.122 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:48.122 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:48.122 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:48.122 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:48.122 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:48.122 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:48.122 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:48.122 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:48.122 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:48.122 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:48.122 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:48.122 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1845455 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1845455 ']' 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1845455 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1845455 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1845455' 00:34:48.122 killing process with pid 1845455 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1845455 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1845455 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1845455 ']' 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1845455 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1845455 ']' 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1845455 00:34:48.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1845455) - No such process 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1845455 is not found' 00:34:48.122 Process with pid 1845455 is not found 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:48.122 00:34:48.122 real 0m15.171s 00:34:48.122 user 0m31.424s 
00:34:48.122 sys 0m0.638s 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:48.122 19:40:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.122 ************************************ 00:34:48.122 END TEST spdkcli_nvmf_tcp 00:34:48.122 ************************************ 00:34:48.122 19:40:58 -- common/autotest_common.sh@1142 -- # return 0 00:34:48.122 19:40:58 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:48.122 19:40:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:48.122 19:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:48.122 19:40:58 -- common/autotest_common.sh@10 -- # set +x 00:34:48.122 ************************************ 00:34:48.122 START TEST nvmf_identify_passthru 00:34:48.122 ************************************ 00:34:48.122 19:40:58 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:48.382 * Looking for test storage... 00:34:48.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:48.382 19:40:58 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.382 19:40:58 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.382 19:40:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.382 19:40:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
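For reference, the NVME_HOSTNQN/NVME_HOSTID pair exported just above feeds the NVME_HOST array used with the 'nvme connect' helper defined alongside it. A minimal sketch of that derivation (assumes nvme-cli is installed; the UUID differs on every host, and the parameter expansion shown is an illustration rather than the exact common.sh code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the trailing UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")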
00:34:48.382 19:40:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.382 19:40:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.382 19:40:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.382 19:40:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.382 19:40:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:48.382 19:40:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:48.382 19:40:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.382 19:40:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.382 19:40:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.382 19:40:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.382 19:40:59 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.382 19:40:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.382 19:40:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.382 19:40:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:48.382 19:40:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.382 19:40:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.382 19:40:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:48.382 19:40:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:48.382 19:40:59 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:48.382 19:40:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.661 19:41:04 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:53.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:53.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:53.661 Found net devices under 0000:86:00.0: cvl_0_0 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:53.661 Found net devices under 0000:86:00.1: cvl_0_1 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
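The two "Found net devices under ..." entries above come from a plain sysfs lookup per matching PCI function; roughly (the 0000:86:00.x addresses and cvl_* interface names are specific to this runner):

    ls /sys/bus/pci/devices/0000:86:00.0/net/   # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:86:00.1/net/   # -> cvl_0_1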
00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:53.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:34:53.661 00:34:53.661 --- 10.0.0.2 ping statistics --- 00:34:53.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.661 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:53.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:34:53.661 00:34:53.661 --- 10.0.0.1 ping statistics --- 00:34:53.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.661 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.661 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:53.662 19:41:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:53.662 19:41:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.662 19:41:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:34:53.662 19:41:04 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:34:53.662 19:41:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:53.662 19:41:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:53.662 19:41:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:53.662 19:41:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:53.662 19:41:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:53.662 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.857 
19:41:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:34:57.857 19:41:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:57.857 19:41:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:57.857 19:41:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:57.857 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.050 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:02.050 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.050 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.050 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1852240 00:35:02.050 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:02.050 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:02.050 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1852240 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1852240 ']' 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.050 [2024-07-15 19:41:12.755752] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:35:02.050 [2024-07-15 19:41:12.755800] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.050 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.050 [2024-07-15 19:41:12.785504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:02.050 [2024-07-15 19:41:12.812486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:02.050 [2024-07-15 19:41:12.854085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
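The serial and model strings captured above (and compared against the passthru target later in the test) are extracted directly from spdk_nvme_identify output; roughly (0000:5e:00.0 is this runner's local NVMe drive, paths relative to the spdk checkout):

    ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}'   # BTLJ72430F0E1P0FGN
    ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 | grep 'Model Number:'  | awk '{print $3}'   # INTEL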
00:35:02.050 [2024-07-15 19:41:12.854125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.050 [2024-07-15 19:41:12.854132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.050 [2024-07-15 19:41:12.854141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.050 [2024-07-15 19:41:12.854146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:02.050 [2024-07-15 19:41:12.854182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.050 [2024-07-15 19:41:12.854281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:02.050 [2024-07-15 19:41:12.854317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:02.050 [2024-07-15 19:41:12.854318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:02.050 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.050 INFO: Log level set to 20 00:35:02.050 INFO: Requests: 00:35:02.050 { 00:35:02.050 "jsonrpc": "2.0", 00:35:02.050 "method": "nvmf_set_config", 00:35:02.050 "id": 1, 00:35:02.050 "params": { 00:35:02.050 "admin_cmd_passthru": { 00:35:02.050 "identify_ctrlr": true 00:35:02.050 } 00:35:02.050 } 00:35:02.050 } 00:35:02.050 00:35:02.050 INFO: response: 00:35:02.050 { 00:35:02.050 "jsonrpc": "2.0", 00:35:02.050 "id": 1, 00:35:02.050 "result": true 00:35:02.050 } 00:35:02.050 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.050 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.050 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.050 INFO: Setting log level to 20 00:35:02.050 INFO: Setting log level to 20 00:35:02.050 INFO: Log level set to 20 00:35:02.050 INFO: Log level set to 20 00:35:02.050 INFO: Requests: 00:35:02.050 { 00:35:02.050 "jsonrpc": "2.0", 00:35:02.050 "method": "framework_start_init", 00:35:02.050 "id": 1 00:35:02.050 } 00:35:02.050 00:35:02.050 INFO: Requests: 00:35:02.050 { 00:35:02.050 "jsonrpc": "2.0", 00:35:02.050 "method": "framework_start_init", 00:35:02.050 "id": 1 00:35:02.050 } 00:35:02.050 00:35:02.308 [2024-07-15 19:41:12.976093] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:02.308 INFO: response: 00:35:02.308 { 00:35:02.308 "jsonrpc": "2.0", 00:35:02.308 "id": 1, 00:35:02.308 "result": true 00:35:02.308 } 00:35:02.308 00:35:02.308 INFO: response: 00:35:02.308 { 00:35:02.308 "jsonrpc": "2.0", 00:35:02.308 "id": 1, 00:35:02.308 "result": true 00:35:02.308 } 00:35:02.308 00:35:02.308 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.308 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
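The RPC exchanges above amount to starting the target with --wait-for-rpc inside the test namespace and then issuing three RPCs before any traffic flows; a rough equivalent using scripts/rpc.py directly (rpc_cmd in this log is a thin wrapper around it, cvl_0_0_ns_spdk is the namespace created earlier in this run, paths relative to the spdk checkout):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # enable the custom identify-ctrlr handler
    ./scripts/rpc.py framework_start_init                        # complete the init deferred by --wait-for-rpc
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # same transport options as identify_passthru.sh@38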
00:35:02.308 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.308 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.308 INFO: Setting log level to 40 00:35:02.308 INFO: Setting log level to 40 00:35:02.308 INFO: Setting log level to 40 00:35:02.308 [2024-07-15 19:41:12.989461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.308 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.308 19:41:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:02.308 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:02.308 19:41:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.308 19:41:13 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:35:02.308 19:41:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.308 19:41:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.595 Nvme0n1 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.595 19:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.595 19:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.595 19:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.595 [2024-07-15 19:41:15.880190] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.595 19:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.595 [ 00:35:05.595 { 00:35:05.595 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:05.595 "subtype": "Discovery", 00:35:05.595 "listen_addresses": [], 00:35:05.595 "allow_any_host": true, 00:35:05.595 "hosts": [] 00:35:05.595 }, 00:35:05.595 { 00:35:05.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:05.595 "subtype": "NVMe", 00:35:05.595 "listen_addresses": [ 00:35:05.595 { 00:35:05.595 "trtype": "TCP", 00:35:05.595 "adrfam": "IPv4", 00:35:05.595 "traddr": "10.0.0.2", 00:35:05.595 
"trsvcid": "4420" 00:35:05.595 } 00:35:05.595 ], 00:35:05.595 "allow_any_host": true, 00:35:05.595 "hosts": [], 00:35:05.595 "serial_number": "SPDK00000000000001", 00:35:05.595 "model_number": "SPDK bdev Controller", 00:35:05.595 "max_namespaces": 1, 00:35:05.595 "min_cntlid": 1, 00:35:05.595 "max_cntlid": 65519, 00:35:05.595 "namespaces": [ 00:35:05.595 { 00:35:05.595 "nsid": 1, 00:35:05.595 "bdev_name": "Nvme0n1", 00:35:05.595 "name": "Nvme0n1", 00:35:05.595 "nguid": "F777C03E682A4698AB5D44D36FAE4A88", 00:35:05.595 "uuid": "f777c03e-682a-4698-ab5d-44d36fae4a88" 00:35:05.595 } 00:35:05.595 ] 00:35:05.595 } 00:35:05.595 ] 00:35:05.595 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.595 19:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:05.595 19:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:05.595 19:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:05.595 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.595 19:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:35:05.595 19:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:05.595 19:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:05.595 19:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:05.595 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.595 19:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:05.595 19:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:35:05.595 19:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:05.595 19:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.595 19:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:05.595 19:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:05.595 19:41:16 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:05.595 19:41:16 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:05.595 19:41:16 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:05.595 19:41:16 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:05.595 19:41:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:05.595 19:41:16 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:05.595 rmmod nvme_tcp 00:35:05.595 rmmod nvme_fabrics 00:35:05.595 rmmod nvme_keyring 00:35:05.595 19:41:16 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:05.595 19:41:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:05.595 19:41:16 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:05.595 19:41:16 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1852240 ']' 00:35:05.595 19:41:16 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1852240 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1852240 ']' 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1852240 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1852240 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1852240' 00:35:05.595 killing process with pid 1852240 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1852240 00:35:05.595 19:41:16 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1852240 00:35:07.017 19:41:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:07.017 19:41:17 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:07.018 19:41:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:07.018 19:41:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:07.018 19:41:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:07.018 19:41:17 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.018 19:41:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:07.018 19:41:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.553 19:41:19 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:09.553 00:35:09.554 real 0m21.027s 00:35:09.554 user 0m27.246s 00:35:09.554 sys 0m4.756s 00:35:09.554 19:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:09.554 19:41:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.554 ************************************ 00:35:09.554 END TEST nvmf_identify_passthru 00:35:09.554 ************************************ 00:35:09.554 19:41:19 -- common/autotest_common.sh@1142 -- # return 0 00:35:09.554 19:41:19 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:09.554 19:41:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:09.554 19:41:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:09.554 19:41:19 -- common/autotest_common.sh@10 -- # set +x 00:35:09.554 ************************************ 00:35:09.554 START TEST nvmf_dif 00:35:09.554 ************************************ 00:35:09.554 19:41:19 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:09.554 * Looking for test 
storage... 00:35:09.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:09.554 19:41:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.554 19:41:20 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.554 19:41:20 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.554 19:41:20 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.554 19:41:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.554 19:41:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.554 19:41:20 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.554 19:41:20 nvmf_dif -- 
paths/export.sh@5 -- # export PATH 00:35:09.554 19:41:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:09.554 19:41:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:09.554 19:41:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:09.554 19:41:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:09.554 19:41:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:09.554 19:41:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.554 19:41:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:09.554 19:41:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:09.554 19:41:20 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:09.554 19:41:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:14.832 19:41:25 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:14.832 19:41:25 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:14.832 19:41:25 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:14.832 19:41:25 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:14.833 19:41:25 nvmf_dif 
-- nvmf/common.sh@298 -- # mlx=() 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:14.833 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:14.833 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:14.833 Found net devices under 0000:86:00.0: cvl_0_0 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:14.833 Found net devices under 0000:86:00.1: cvl_0_1 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:14.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:14.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:35:14.833 00:35:14.833 --- 10.0.0.2 ping statistics --- 00:35:14.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:14.833 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:14.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:14.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:35:14.833 00:35:14.833 --- 10.0.0.1 ping statistics --- 00:35:14.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:14.833 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:14.833 19:41:25 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:17.372 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:17.372 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:17.372 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:17.372 19:41:27 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.372 19:41:27 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:17.372 19:41:27 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:17.372 19:41:27 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.372 19:41:27 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:17.372 19:41:27 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:17.372 19:41:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:17.372 19:41:28 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:17.372 19:41:28 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:17.372 19:41:28 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:17.372 19:41:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:17.372 19:41:28 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1857688 00:35:17.372 19:41:28 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1857688 00:35:17.372 19:41:28 nvmf_dif -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:17.372 19:41:28 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1857688 ']' 00:35:17.372 19:41:28 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.372 19:41:28 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:17.372 19:41:28 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.372 19:41:28 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:17.372 19:41:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:17.372 [2024-07-15 19:41:28.074219] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:35:17.372 [2024-07-15 19:41:28.074265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.372 EAL: No free 2048 kB hugepages reported on node 1 00:35:17.372 [2024-07-15 19:41:28.103352] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:17.372 [2024-07-15 19:41:28.130200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.372 [2024-07-15 19:41:28.170605] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:17.372 [2024-07-15 19:41:28.170644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:17.372 [2024-07-15 19:41:28.170651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:17.372 [2024-07-15 19:41:28.170660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:17.372 [2024-07-15 19:41:28.170665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
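Stripped of the xtrace prefixes, starting the target for the dif tests amounts to launching nvmf_tgt inside the namespace and waiting for its RPC socket; rpc.py is shown below in place of the harness's rpc_cmd/waitforlisten helpers, so treat this as an approximate manual equivalent rather than the script's exact code:

# shared-memory id 0, all tracepoint groups enabled (-e 0xFFFF), run inside the target namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
# block until the application answers on the default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# then create the TCP transport with DIF insert/strip enabled, as target/dif.sh does next
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip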
00:35:17.372 [2024-07-15 19:41:28.170682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.633 19:41:28 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:17.633 19:41:28 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:17.633 19:41:28 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:17.633 19:41:28 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:17.633 19:41:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:17.633 19:41:28 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.633 19:41:28 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:17.633 19:41:28 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:17.633 19:41:28 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.633 19:41:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:17.633 [2024-07-15 19:41:28.299625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.633 19:41:28 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.633 19:41:28 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:17.633 19:41:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:17.633 19:41:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:17.633 19:41:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:17.633 ************************************ 00:35:17.633 START TEST fio_dif_1_default 00:35:17.633 ************************************ 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:17.633 bdev_null0 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:17.633 [2024-07-15 19:41:28.367904] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:17.633 { 00:35:17.633 "params": { 00:35:17.633 "name": "Nvme$subsystem", 00:35:17.633 "trtype": "$TEST_TRANSPORT", 00:35:17.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:17.633 "adrfam": "ipv4", 00:35:17.633 "trsvcid": "$NVMF_PORT", 00:35:17.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:17.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:17.633 "hdgst": ${hdgst:-false}, 00:35:17.633 "ddgst": ${ddgst:-false} 00:35:17.633 }, 00:35:17.633 "method": "bdev_nvme_attach_controller" 00:35:17.633 } 00:35:17.633 EOF 00:35:17.633 )") 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:17.633 "params": { 00:35:17.633 "name": "Nvme0", 00:35:17.633 "trtype": "tcp", 00:35:17.633 "traddr": "10.0.0.2", 00:35:17.633 "adrfam": "ipv4", 00:35:17.633 "trsvcid": "4420", 00:35:17.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:17.633 "hdgst": false, 00:35:17.633 "ddgst": false 00:35:17.633 }, 00:35:17.633 "method": "bdev_nvme_attach_controller" 00:35:17.633 }' 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:17.633 19:41:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.892 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:17.892 fio-3.35 00:35:17.892 Starting 1 thread 00:35:18.151 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.426 00:35:30.426 filename0: (groupid=0, jobs=1): err= 0: pid=1857956: Mon Jul 15 19:41:39 2024 00:35:30.426 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10024msec) 00:35:30.426 slat (nsec): min=6074, max=62876, avg=6346.39, stdev=1512.85 00:35:30.426 clat (usec): min=556, max=44675, avg=21084.99, stdev=20456.00 00:35:30.426 lat (usec): min=562, max=44706, avg=21091.33, stdev=20455.92 00:35:30.426 clat percentiles (usec): 00:35:30.426 | 1.00th=[ 570], 5.00th=[ 578], 10.00th=[ 578], 20.00th=[ 586], 00:35:30.426 | 30.00th=[ 594], 40.00th=[ 603], 50.00th=[41157], 60.00th=[41157], 00:35:30.426 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:35:30.426 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:30.426 | 99.99th=[44827] 00:35:30.426 bw ( KiB/s): min= 704, max= 768, per=99.98%, avg=758.40, stdev=21.02, samples=20 00:35:30.426 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:35:30.426 lat 
(usec) : 750=49.89% 00:35:30.426 lat (msec) : 50=50.11% 00:35:30.426 cpu : usr=95.04%, sys=4.69%, ctx=18, majf=0, minf=198 00:35:30.426 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:30.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.426 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:30.426 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:30.426 00:35:30.426 Run status group 0 (all jobs): 00:35:30.426 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7600KiB (7782kB), run=10024-10024msec 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.426 00:35:30.426 real 0m11.135s 00:35:30.426 user 0m16.393s 00:35:30.426 sys 0m0.767s 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 ************************************ 00:35:30.426 END TEST fio_dif_1_default 00:35:30.426 ************************************ 00:35:30.426 19:41:39 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:30.426 19:41:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:30.426 19:41:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:30.426 19:41:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 ************************************ 00:35:30.426 START TEST fio_dif_1_multi_subsystems 00:35:30.426 ************************************ 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.426 19:41:39 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 bdev_null0 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 [2024-07-15 19:41:39.571188] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 bdev_null1 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:30.426 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:30.426 { 00:35:30.426 "params": { 00:35:30.426 "name": "Nvme$subsystem", 00:35:30.426 "trtype": "$TEST_TRANSPORT", 00:35:30.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.426 "adrfam": "ipv4", 00:35:30.426 "trsvcid": "$NVMF_PORT", 00:35:30.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.426 "hdgst": ${hdgst:-false}, 00:35:30.426 "ddgst": ${ddgst:-false} 00:35:30.426 }, 00:35:30.426 "method": "bdev_nvme_attach_controller" 00:35:30.427 } 00:35:30.427 EOF 00:35:30.427 )") 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.427 
19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:30.427 { 00:35:30.427 "params": { 00:35:30.427 "name": "Nvme$subsystem", 00:35:30.427 "trtype": "$TEST_TRANSPORT", 00:35:30.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.427 "adrfam": "ipv4", 00:35:30.427 "trsvcid": "$NVMF_PORT", 00:35:30.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.427 "hdgst": ${hdgst:-false}, 00:35:30.427 "ddgst": ${ddgst:-false} 00:35:30.427 }, 00:35:30.427 "method": "bdev_nvme_attach_controller" 00:35:30.427 } 00:35:30.427 EOF 00:35:30.427 )") 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
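Condensed from the rpc_cmd traces above, the two-subsystem topology that fio_dif_1_multi_subsystems exercises is built with RPCs along these lines (rpc.py standing in for the rpc_cmd wrapper):

for i in 0 1; do
  # 64 MB null bdev: 512-byte blocks plus 16 bytes of metadata carrying DIF type 1
  ./scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

The JSON printed just below by gen_nvmf_target_json then hands fio one bdev_nvme_attach_controller entry per subsystem, which is why the run that follows reports two filenames.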
00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:30.427 "params": { 00:35:30.427 "name": "Nvme0", 00:35:30.427 "trtype": "tcp", 00:35:30.427 "traddr": "10.0.0.2", 00:35:30.427 "adrfam": "ipv4", 00:35:30.427 "trsvcid": "4420", 00:35:30.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:30.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:30.427 "hdgst": false, 00:35:30.427 "ddgst": false 00:35:30.427 }, 00:35:30.427 "method": "bdev_nvme_attach_controller" 00:35:30.427 },{ 00:35:30.427 "params": { 00:35:30.427 "name": "Nvme1", 00:35:30.427 "trtype": "tcp", 00:35:30.427 "traddr": "10.0.0.2", 00:35:30.427 "adrfam": "ipv4", 00:35:30.427 "trsvcid": "4420", 00:35:30.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:30.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:30.427 "hdgst": false, 00:35:30.427 "ddgst": false 00:35:30.427 }, 00:35:30.427 "method": "bdev_nvme_attach_controller" 00:35:30.427 }' 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:30.427 19:41:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.427 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:30.427 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:30.427 fio-3.35 00:35:30.427 Starting 2 threads 00:35:30.427 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.410 00:35:40.410 filename0: (groupid=0, jobs=1): err= 0: pid=1859977: Mon Jul 15 19:41:50 2024 00:35:40.410 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:35:40.410 slat (nsec): min=6015, max=28312, avg=7783.27, stdev=2554.75 00:35:40.410 clat (usec): min=40846, max=42110, avg=40993.93, stdev=136.20 00:35:40.410 lat (usec): min=40852, max=42138, avg=41001.72, stdev=136.67 00:35:40.410 clat percentiles (usec): 00:35:40.410 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:40.410 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:40.410 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:40.410 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:40.410 | 99.99th=[42206] 
00:35:40.410 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:35:40.410 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:40.410 lat (msec) : 50=100.00% 00:35:40.410 cpu : usr=97.50%, sys=2.23%, ctx=11, majf=0, minf=55 00:35:40.410 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:40.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.410 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.410 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:40.410 filename1: (groupid=0, jobs=1): err= 0: pid=1859978: Mon Jul 15 19:41:50 2024 00:35:40.410 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:35:40.410 slat (nsec): min=6035, max=57114, avg=7882.37, stdev=3277.23 00:35:40.410 clat (usec): min=40754, max=42192, avg=40989.61, stdev=128.27 00:35:40.410 lat (usec): min=40760, max=42224, avg=40997.49, stdev=128.85 00:35:40.410 clat percentiles (usec): 00:35:40.410 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:40.410 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:40.410 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:40.410 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:40.410 | 99.99th=[42206] 00:35:40.410 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:35:40.410 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:40.410 lat (msec) : 50=100.00% 00:35:40.410 cpu : usr=97.53%, sys=2.20%, ctx=10, majf=0, minf=200 00:35:40.410 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:40.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:40.410 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:40.410 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:40.410 00:35:40.410 Run status group 0 (all jobs): 00:35:40.410 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10008-10009msec 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.410 19:41:50 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.410 00:35:40.410 real 0m11.390s 00:35:40.410 user 0m26.624s 00:35:40.410 sys 0m0.812s 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:40.410 19:41:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:40.410 ************************************ 00:35:40.410 END TEST fio_dif_1_multi_subsystems 00:35:40.410 ************************************ 00:35:40.410 19:41:50 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:40.410 19:41:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:40.410 19:41:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:40.410 19:41:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:40.410 19:41:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:40.410 ************************************ 00:35:40.410 START TEST fio_dif_rand_params 00:35:40.410 ************************************ 00:35:40.410 19:41:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:40.411 19:41:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
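fio_dif_rand_params switches to a DIF type 3 null bdev and drives it with 128 KiB I/O at queue depth 3 from 3 jobs for 5 seconds (target/dif.sh@103 above). The generated job file is fed to fio on a file descriptor, so it never appears in the log; a plausible standalone equivalent, assuming the attached bdev ends up named Nvme0n1 and the gen_nvmf_target_json output is saved to bdev.json, would be:

cat > dif_rand.fio <<'EOF'
[global]
; the SPDK fio plugins require thread=1
thread=1
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
rw=randread
; assumed bdev name produced by bdev_nvme_attach_controller with name Nvme0
filename=Nvme0n1
EOF
LD_PRELOAD=./build/fio/spdk_bdev fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif_rand.fio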
00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.411 bdev_null0 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:40.411 [2024-07-15 19:41:51.029276] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:40.411 { 00:35:40.411 "params": { 00:35:40.411 "name": "Nvme$subsystem", 00:35:40.411 "trtype": "$TEST_TRANSPORT", 00:35:40.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.411 "adrfam": "ipv4", 00:35:40.411 "trsvcid": "$NVMF_PORT", 00:35:40.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.411 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.411 "hdgst": ${hdgst:-false}, 00:35:40.411 "ddgst": ${ddgst:-false} 00:35:40.411 }, 00:35:40.411 "method": "bdev_nvme_attach_controller" 00:35:40.411 } 00:35:40.411 EOF 00:35:40.411 )") 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:40.411 "params": { 00:35:40.411 "name": "Nvme0", 00:35:40.411 "trtype": "tcp", 00:35:40.411 "traddr": "10.0.0.2", 00:35:40.411 "adrfam": "ipv4", 00:35:40.411 "trsvcid": "4420", 00:35:40.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.411 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.411 "hdgst": false, 00:35:40.411 "ddgst": false 00:35:40.411 }, 00:35:40.411 "method": "bdev_nvme_attach_controller" 00:35:40.411 }' 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:40.411 19:41:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:40.671 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:40.671 ... 
00:35:40.671 fio-3.35 00:35:40.671 Starting 3 threads 00:35:40.671 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.238 00:35:47.238 filename0: (groupid=0, jobs=1): err= 0: pid=1861774: Mon Jul 15 19:41:56 2024 00:35:47.238 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(160MiB/5044msec) 00:35:47.238 slat (nsec): min=6215, max=26351, avg=9158.51, stdev=2783.57 00:35:47.238 clat (usec): min=4143, max=52468, avg=11763.46, stdev=12956.73 00:35:47.238 lat (usec): min=4150, max=52479, avg=11772.62, stdev=12956.88 00:35:47.238 clat percentiles (usec): 00:35:47.238 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 5932], 00:35:47.238 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 7177], 60.00th=[ 7898], 00:35:47.238 | 70.00th=[ 8717], 80.00th=[ 9765], 90.00th=[46400], 95.00th=[48497], 00:35:47.238 | 99.00th=[50594], 99.50th=[50594], 99.90th=[52167], 99.95th=[52691], 00:35:47.238 | 99.99th=[52691] 00:35:47.238 bw ( KiB/s): min=25600, max=43776, per=32.55%, avg=32742.40, stdev=5313.26, samples=10 00:35:47.238 iops : min= 200, max= 342, avg=255.80, stdev=41.51, samples=10 00:35:47.238 lat (msec) : 10=82.28%, 20=6.79%, 50=9.37%, 100=1.56% 00:35:47.238 cpu : usr=95.38%, sys=4.30%, ctx=10, majf=0, minf=71 00:35:47.238 IO depths : 1=7.4%, 2=92.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:47.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.238 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.238 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:47.238 filename0: (groupid=0, jobs=1): err= 0: pid=1861775: Mon Jul 15 19:41:56 2024 00:35:47.238 read: IOPS=270, BW=33.8MiB/s (35.5MB/s)(170MiB/5033msec) 00:35:47.238 slat (nsec): min=6248, max=27979, avg=9543.36, stdev=2731.49 00:35:47.238 clat (usec): min=3839, max=51133, avg=11070.22, stdev=12139.79 00:35:47.238 lat (usec): min=3846, max=51146, avg=11079.77, stdev=12139.91 00:35:47.238 clat percentiles (usec): 00:35:47.238 | 1.00th=[ 4178], 5.00th=[ 4490], 10.00th=[ 4686], 20.00th=[ 5800], 00:35:47.238 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7635], 00:35:47.238 | 70.00th=[ 8717], 80.00th=[ 9634], 90.00th=[11600], 95.00th=[47973], 00:35:47.238 | 99.00th=[50070], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:35:47.238 | 99.99th=[51119] 00:35:47.238 bw ( KiB/s): min=24064, max=44032, per=34.58%, avg=34790.40, stdev=7055.83, samples=10 00:35:47.238 iops : min= 188, max= 344, avg=271.80, stdev=55.12, samples=10 00:35:47.238 lat (msec) : 4=0.22%, 10=83.26%, 20=7.05%, 50=8.59%, 100=0.88% 00:35:47.238 cpu : usr=94.95%, sys=4.75%, ctx=10, majf=0, minf=40 00:35:47.238 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:47.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.238 issued rwts: total=1362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.238 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:47.238 filename0: (groupid=0, jobs=1): err= 0: pid=1861776: Mon Jul 15 19:41:56 2024 00:35:47.238 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(165MiB/5003msec) 00:35:47.238 slat (nsec): min=6244, max=23239, avg=9374.78, stdev=2513.02 00:35:47.238 clat (usec): min=3039, max=52612, avg=11346.89, stdev=12423.47 00:35:47.238 lat (usec): min=3046, max=52635, avg=11356.26, stdev=12423.56 00:35:47.238 clat percentiles (usec): 
00:35:47.238 | 1.00th=[ 4359], 5.00th=[ 4686], 10.00th=[ 4817], 20.00th=[ 5473], 00:35:47.238 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7308], 60.00th=[ 7963], 00:35:47.238 | 70.00th=[ 8848], 80.00th=[ 9765], 90.00th=[13173], 95.00th=[48497], 00:35:47.238 | 99.00th=[50594], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:35:47.238 | 99.99th=[52691] 00:35:47.238 bw ( KiB/s): min=21504, max=45568, per=33.90%, avg=34104.89, stdev=7851.71, samples=9 00:35:47.238 iops : min= 168, max= 356, avg=266.44, stdev=61.34, samples=9 00:35:47.238 lat (msec) : 4=0.15%, 10=81.76%, 20=8.33%, 50=7.57%, 100=2.20% 00:35:47.238 cpu : usr=94.94%, sys=4.74%, ctx=7, majf=0, minf=156 00:35:47.238 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:47.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.238 issued rwts: total=1321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.238 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:47.238 00:35:47.238 Run status group 0 (all jobs): 00:35:47.238 READ: bw=98.2MiB/s (103MB/s), 31.7MiB/s-33.8MiB/s (33.3MB/s-35.5MB/s), io=496MiB (520MB), run=5003-5044msec 00:35:47.238 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:47.238 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:47.238 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:47.238 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:47.238 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:47.238 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 bdev_null0 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 [2024-07-15 19:41:57.085544] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 bdev_null1 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 bdev_null2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 
00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:47.239 { 00:35:47.239 "params": { 00:35:47.239 "name": "Nvme$subsystem", 00:35:47.239 "trtype": "$TEST_TRANSPORT", 00:35:47.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.239 "adrfam": "ipv4", 00:35:47.239 "trsvcid": "$NVMF_PORT", 00:35:47.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.239 "hdgst": ${hdgst:-false}, 00:35:47.239 "ddgst": ${ddgst:-false} 00:35:47.239 }, 00:35:47.239 "method": "bdev_nvme_attach_controller" 00:35:47.239 } 00:35:47.239 EOF 00:35:47.239 )") 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:47.239 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:47.240 { 00:35:47.240 "params": { 00:35:47.240 "name": "Nvme$subsystem", 00:35:47.240 "trtype": "$TEST_TRANSPORT", 00:35:47.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.240 "adrfam": "ipv4", 00:35:47.240 "trsvcid": "$NVMF_PORT", 00:35:47.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.240 "hdgst": ${hdgst:-false}, 00:35:47.240 "ddgst": ${ddgst:-false} 00:35:47.240 }, 00:35:47.240 "method": "bdev_nvme_attach_controller" 00:35:47.240 } 00:35:47.240 EOF 00:35:47.240 )") 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:47.240 { 00:35:47.240 "params": { 00:35:47.240 "name": "Nvme$subsystem", 00:35:47.240 "trtype": "$TEST_TRANSPORT", 00:35:47.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:47.240 "adrfam": "ipv4", 00:35:47.240 "trsvcid": "$NVMF_PORT", 00:35:47.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:47.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:47.240 "hdgst": ${hdgst:-false}, 00:35:47.240 "ddgst": ${ddgst:-false} 00:35:47.240 }, 00:35:47.240 "method": "bdev_nvme_attach_controller" 00:35:47.240 } 00:35:47.240 EOF 00:35:47.240 )") 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:47.240 "params": { 00:35:47.240 "name": "Nvme0", 00:35:47.240 "trtype": "tcp", 00:35:47.240 "traddr": "10.0.0.2", 00:35:47.240 "adrfam": "ipv4", 00:35:47.240 "trsvcid": "4420", 00:35:47.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.240 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:47.240 "hdgst": false, 00:35:47.240 "ddgst": false 00:35:47.240 }, 00:35:47.240 "method": "bdev_nvme_attach_controller" 00:35:47.240 },{ 00:35:47.240 "params": { 00:35:47.240 "name": "Nvme1", 00:35:47.240 "trtype": "tcp", 00:35:47.240 "traddr": "10.0.0.2", 00:35:47.240 "adrfam": "ipv4", 00:35:47.240 "trsvcid": "4420", 00:35:47.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:47.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:47.240 "hdgst": false, 00:35:47.240 "ddgst": false 00:35:47.240 }, 00:35:47.240 "method": "bdev_nvme_attach_controller" 00:35:47.240 },{ 00:35:47.240 "params": { 00:35:47.240 "name": "Nvme2", 00:35:47.240 "trtype": "tcp", 00:35:47.240 "traddr": "10.0.0.2", 00:35:47.240 "adrfam": "ipv4", 00:35:47.240 "trsvcid": "4420", 00:35:47.240 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:47.240 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:47.240 "hdgst": false, 00:35:47.240 "ddgst": false 00:35:47.240 }, 00:35:47.240 "method": "bdev_nvme_attach_controller" 00:35:47.240 }' 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:47.240 19:41:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:47.240 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:47.240 ... 00:35:47.240 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:47.240 ... 00:35:47.240 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:47.240 ... 00:35:47.240 fio-3.35 00:35:47.240 Starting 24 threads 00:35:47.240 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.449 00:35:59.449 filename0: (groupid=0, jobs=1): err= 0: pid=1863036: Mon Jul 15 19:42:08 2024 00:35:59.449 read: IOPS=574, BW=2299KiB/s (2354kB/s)(22.5MiB/10023msec) 00:35:59.449 slat (nsec): min=6994, max=39269, avg=18678.19, stdev=5285.29 00:35:59.449 clat (usec): min=7387, max=44702, avg=27680.56, stdev=1820.62 00:35:59.449 lat (usec): min=7411, max=44717, avg=27699.24, stdev=1820.92 00:35:59.449 clat percentiles (usec): 00:35:59.449 | 1.00th=[20317], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.449 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.449 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:59.449 | 99.00th=[29230], 99.50th=[29492], 99.90th=[38011], 99.95th=[40633], 00:35:59.449 | 99.99th=[44827] 00:35:59.449 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2294.45, stdev=51.96, samples=20 00:35:59.449 iops : min= 544, max= 608, avg=573.60, stdev=13.00, samples=20 00:35:59.449 lat (msec) : 10=0.56%, 20=0.42%, 50=99.03% 00:35:59.449 cpu : usr=98.48%, sys=1.14%, ctx=5, majf=0, minf=9 00:35:59.449 IO depths : 1=5.2%, 2=11.4%, 4=24.9%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:35:59.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.449 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.449 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.449 filename0: (groupid=0, jobs=1): err= 0: pid=1863037: Mon Jul 15 19:42:08 2024 00:35:59.449 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:35:59.449 slat (nsec): min=7077, max=38210, avg=18415.92, stdev=5109.13 00:35:59.449 clat (usec): min=12219, max=35364, avg=27792.13, stdev=987.52 00:35:59.449 lat (usec): min=12234, max=35379, avg=27810.55, stdev=987.56 00:35:59.449 clat percentiles (usec): 00:35:59.449 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.449 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.449 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:59.449 | 99.00th=[29230], 99.50th=[29492], 99.90th=[35390], 99.95th=[35390], 00:35:59.449 | 99.99th=[35390] 00:35:59.449 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2284.10, stdev=46.69, samples=20 00:35:59.449 iops : min= 544, max= 576, avg=571.00, stdev=11.67, samples=20 00:35:59.449 lat (msec) : 20=0.28%, 50=99.72% 
00:35:59.449 cpu : usr=98.61%, sys=1.01%, ctx=13, majf=0, minf=9 00:35:59.449 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:59.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.449 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.449 filename0: (groupid=0, jobs=1): err= 0: pid=1863038: Mon Jul 15 19:42:08 2024 00:35:59.449 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10003msec) 00:35:59.449 slat (nsec): min=7586, max=52779, avg=23774.37, stdev=7079.60 00:35:59.449 clat (usec): min=20933, max=39649, avg=27797.93, stdev=799.73 00:35:59.449 lat (usec): min=20952, max=39693, avg=27821.71, stdev=800.60 00:35:59.449 clat percentiles (usec): 00:35:59.449 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.449 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:35:59.449 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:35:59.449 | 99.00th=[28967], 99.50th=[29492], 99.90th=[39584], 99.95th=[39584], 00:35:59.449 | 99.99th=[39584] 00:35:59.449 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2277.26, stdev=53.20, samples=19 00:35:59.449 iops : min= 544, max= 576, avg=569.32, stdev=13.30, samples=19 00:35:59.449 lat (msec) : 50=100.00% 00:35:59.449 cpu : usr=98.71%, sys=0.92%, ctx=5, majf=0, minf=9 00:35:59.449 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:59.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.449 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.449 filename0: (groupid=0, jobs=1): err= 0: pid=1863039: Mon Jul 15 19:42:08 2024 00:35:59.449 read: IOPS=576, BW=2306KiB/s (2361kB/s)(22.6MiB/10027msec) 00:35:59.449 slat (nsec): min=7036, max=39558, avg=17171.67, stdev=5795.59 00:35:59.449 clat (usec): min=12218, max=47323, avg=27600.98, stdev=2179.22 00:35:59.449 lat (usec): min=12235, max=47350, avg=27618.15, stdev=2180.11 00:35:59.449 clat percentiles (usec): 00:35:59.449 | 1.00th=[15664], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:35:59.449 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.449 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:59.449 | 99.00th=[32900], 99.50th=[36963], 99.90th=[47449], 99.95th=[47449], 00:35:59.449 | 99.99th=[47449] 00:35:59.449 bw ( KiB/s): min= 2176, max= 2608, per=4.19%, avg=2305.70, stdev=81.24, samples=20 00:35:59.449 iops : min= 544, max= 652, avg=576.40, stdev=20.31, samples=20 00:35:59.449 lat (msec) : 20=2.30%, 50=97.70% 00:35:59.449 cpu : usr=98.75%, sys=0.86%, ctx=16, majf=0, minf=9 00:35:59.449 IO depths : 1=4.7%, 2=10.6%, 4=24.2%, 8=52.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:59.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.449 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.449 issued rwts: total=5780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.449 filename0: (groupid=0, jobs=1): err= 0: pid=1863040: Mon Jul 15 19:42:08 2024 00:35:59.449 read: IOPS=565, BW=2263KiB/s 
(2317kB/s)(22.1MiB/10006msec) 00:35:59.449 slat (nsec): min=5590, max=59341, avg=16467.09, stdev=7070.67 00:35:59.449 clat (usec): min=15581, max=76223, avg=28193.66, stdev=2647.32 00:35:59.449 lat (usec): min=15595, max=76238, avg=28210.12, stdev=2647.14 00:35:59.449 clat percentiles (usec): 00:35:59.449 | 1.00th=[23725], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:59.449 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:59.449 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28967], 00:35:59.449 | 99.00th=[39584], 99.50th=[42206], 99.90th=[62129], 99.95th=[76022], 00:35:59.449 | 99.99th=[76022] 00:35:59.449 bw ( KiB/s): min= 2036, max= 2308, per=4.11%, avg=2260.40, stdev=63.38, samples=20 00:35:59.449 iops : min= 509, max= 577, avg=565.10, stdev=15.84, samples=20 00:35:59.449 lat (msec) : 20=0.41%, 50=99.31%, 100=0.28% 00:35:59.449 cpu : usr=98.49%, sys=1.13%, ctx=15, majf=0, minf=9 00:35:59.449 IO depths : 1=0.6%, 2=2.1%, 4=7.1%, 8=74.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:35:59.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.449 complete : 0=0.0%, 4=90.8%, 8=7.5%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 issued rwts: total=5660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.450 filename0: (groupid=0, jobs=1): err= 0: pid=1863041: Mon Jul 15 19:42:08 2024 00:35:59.450 read: IOPS=570, BW=2280KiB/s (2335kB/s)(22.3MiB/10020msec) 00:35:59.450 slat (nsec): min=6010, max=57021, avg=20023.81, stdev=8077.82 00:35:59.450 clat (usec): min=21034, max=57493, avg=27910.22, stdev=1646.74 00:35:59.450 lat (usec): min=21050, max=57509, avg=27930.25, stdev=1645.90 00:35:59.450 clat percentiles (usec): 00:35:59.450 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.450 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.450 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:59.450 | 99.00th=[28967], 99.50th=[29492], 99.90th=[57410], 99.95th=[57410], 00:35:59.450 | 99.99th=[57410] 00:35:59.450 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2275.20, stdev=66.61, samples=20 00:35:59.450 iops : min= 513, max= 576, avg=568.80, stdev=16.65, samples=20 00:35:59.450 lat (msec) : 50=99.72%, 100=0.28% 00:35:59.450 cpu : usr=98.81%, sys=0.81%, ctx=8, majf=0, minf=9 00:35:59.450 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:59.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.450 filename0: (groupid=0, jobs=1): err= 0: pid=1863042: Mon Jul 15 19:42:08 2024 00:35:59.450 read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10022msec) 00:35:59.450 slat (nsec): min=4117, max=54588, avg=23513.86, stdev=7428.64 00:35:59.450 clat (usec): min=17778, max=68628, avg=27882.84, stdev=1877.15 00:35:59.450 lat (usec): min=17795, max=68640, avg=27906.35, stdev=1876.18 00:35:59.450 clat percentiles (usec): 00:35:59.450 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.450 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.450 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:35:59.450 | 99.00th=[28967], 99.50th=[29492], 
99.90th=[57934], 99.95th=[68682], 00:35:59.450 | 99.99th=[68682] 00:35:59.450 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2275.00, stdev=67.32, samples=20 00:35:59.450 iops : min= 512, max= 576, avg=568.75, stdev=16.83, samples=20 00:35:59.450 lat (msec) : 20=0.09%, 50=99.63%, 100=0.28% 00:35:59.450 cpu : usr=98.72%, sys=0.89%, ctx=8, majf=0, minf=10 00:35:59.450 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:59.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 issued rwts: total=5710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.450 filename0: (groupid=0, jobs=1): err= 0: pid=1863043: Mon Jul 15 19:42:08 2024 00:35:59.450 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10007msec) 00:35:59.450 slat (nsec): min=4671, max=49354, avg=22599.29, stdev=7685.14 00:35:59.450 clat (usec): min=21031, max=72248, avg=27892.52, stdev=2462.28 00:35:59.450 lat (usec): min=21040, max=72261, avg=27915.12, stdev=2461.68 00:35:59.450 clat percentiles (usec): 00:35:59.450 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.450 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:35:59.450 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:35:59.450 | 99.00th=[29230], 99.50th=[31851], 99.90th=[71828], 99.95th=[71828], 00:35:59.450 | 99.99th=[71828] 00:35:59.450 bw ( KiB/s): min= 2052, max= 2304, per=4.13%, avg=2271.95, stdev=69.64, samples=20 00:35:59.450 iops : min= 513, max= 576, avg=567.95, stdev=17.39, samples=20 00:35:59.450 lat (msec) : 50=99.72%, 100=0.28% 00:35:59.450 cpu : usr=98.78%, sys=0.84%, ctx=7, majf=0, minf=9 00:35:59.450 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:59.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.450 filename1: (groupid=0, jobs=1): err= 0: pid=1863044: Mon Jul 15 19:42:08 2024 00:35:59.450 read: IOPS=571, BW=2288KiB/s (2343kB/s)(22.4MiB/10004msec) 00:35:59.450 slat (nsec): min=6636, max=87210, avg=55499.59, stdev=12780.49 00:35:59.450 clat (usec): min=11709, max=68180, avg=27526.94, stdev=2313.38 00:35:59.450 lat (usec): min=11744, max=68204, avg=27582.44, stdev=2311.61 00:35:59.450 clat percentiles (usec): 00:35:59.450 | 1.00th=[20317], 5.00th=[26870], 10.00th=[26870], 20.00th=[27132], 00:35:59.450 | 30.00th=[27132], 40.00th=[27395], 50.00th=[27395], 60.00th=[27657], 00:35:59.450 | 70.00th=[27657], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:35:59.450 | 99.00th=[30278], 99.50th=[35390], 99.90th=[60556], 99.95th=[60556], 00:35:59.450 | 99.99th=[68682] 00:35:59.450 bw ( KiB/s): min= 2052, max= 2416, per=4.15%, avg=2281.47, stdev=73.46, samples=19 00:35:59.450 iops : min= 513, max= 604, avg=570.37, stdev=18.36, samples=19 00:35:59.450 lat (msec) : 20=0.96%, 50=98.76%, 100=0.28% 00:35:59.450 cpu : usr=98.61%, sys=0.97%, ctx=7, majf=0, minf=11 00:35:59.450 IO depths : 1=4.4%, 2=10.1%, 4=23.0%, 8=54.0%, 16=8.5%, 32=0.0%, >=64=0.0% 00:35:59.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 issued rwts: total=5722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.450 filename1: (groupid=0, jobs=1): err= 0: pid=1863045: Mon Jul 15 19:42:08 2024 00:35:59.450 read: IOPS=576, BW=2305KiB/s (2361kB/s)(22.5MiB/10015msec) 00:35:59.450 slat (nsec): min=6890, max=49886, avg=18757.46, stdev=8049.22 00:35:59.450 clat (usec): min=6797, max=42666, avg=27615.78, stdev=2096.84 00:35:59.450 lat (usec): min=6804, max=42678, avg=27634.54, stdev=2097.09 00:35:59.450 clat percentiles (usec): 00:35:59.450 | 1.00th=[17957], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.450 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.450 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:59.450 | 99.00th=[29492], 99.50th=[29492], 99.90th=[42730], 99.95th=[42730], 00:35:59.450 | 99.99th=[42730] 00:35:59.450 bw ( KiB/s): min= 2176, max= 2656, per=4.19%, avg=2302.32, stdev=98.04, samples=19 00:35:59.450 iops : min= 544, max= 664, avg=575.58, stdev=24.51, samples=19 00:35:59.450 lat (msec) : 10=0.55%, 20=1.21%, 50=98.23% 00:35:59.450 cpu : usr=98.57%, sys=1.05%, ctx=7, majf=0, minf=9 00:35:59.450 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:59.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 issued rwts: total=5772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.450 filename1: (groupid=0, jobs=1): err= 0: pid=1863046: Mon Jul 15 19:42:08 2024 00:35:59.450 read: IOPS=570, BW=2280KiB/s (2335kB/s)(22.3MiB/10021msec) 00:35:59.450 slat (nsec): min=4161, max=51067, avg=23956.81, stdev=7283.67 00:35:59.450 clat (usec): min=21056, max=58014, avg=27852.73, stdev=1676.19 00:35:59.450 lat (usec): min=21091, max=58026, avg=27876.69, stdev=1675.42 00:35:59.450 clat percentiles (usec): 00:35:59.450 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.450 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:35:59.450 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:35:59.450 | 99.00th=[28967], 99.50th=[29492], 99.90th=[57934], 99.95th=[57934], 00:35:59.450 | 99.99th=[57934] 00:35:59.450 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2275.20, stdev=66.61, samples=20 00:35:59.450 iops : min= 513, max= 576, avg=568.80, stdev=16.65, samples=20 00:35:59.450 lat (msec) : 50=99.72%, 100=0.28% 00:35:59.450 cpu : usr=98.70%, sys=0.91%, ctx=12, majf=0, minf=9 00:35:59.450 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:59.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.450 filename1: (groupid=0, jobs=1): err= 0: pid=1863047: Mon Jul 15 19:42:08 2024 00:35:59.450 read: IOPS=570, BW=2281KiB/s (2336kB/s)(22.3MiB/10016msec) 00:35:59.450 slat (nsec): min=6134, max=54403, avg=23002.40, stdev=7059.58 00:35:59.450 clat (usec): min=21040, max=52751, avg=27838.08, stdev=1412.00 00:35:59.450 lat (usec): min=21068, max=52769, avg=27861.08, stdev=1411.57 00:35:59.450 clat percentiles 
(usec): 00:35:59.450 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.450 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:35:59.450 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:35:59.450 | 99.00th=[28967], 99.50th=[29492], 99.90th=[52691], 99.95th=[52691], 00:35:59.450 | 99.99th=[52691] 00:35:59.450 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2276.35, stdev=66.77, samples=20 00:35:59.450 iops : min= 512, max= 576, avg=569.05, stdev=16.70, samples=20 00:35:59.450 lat (msec) : 50=99.72%, 100=0.28% 00:35:59.450 cpu : usr=98.77%, sys=0.85%, ctx=15, majf=0, minf=9 00:35:59.450 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:59.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.450 filename1: (groupid=0, jobs=1): err= 0: pid=1863048: Mon Jul 15 19:42:08 2024 00:35:59.450 read: IOPS=571, BW=2286KiB/s (2341kB/s)(22.4MiB/10023msec) 00:35:59.450 slat (nsec): min=7047, max=34532, avg=12680.71, stdev=4306.20 00:35:59.450 clat (usec): min=18851, max=37181, avg=27888.92, stdev=1352.63 00:35:59.450 lat (usec): min=18860, max=37201, avg=27901.60, stdev=1352.64 00:35:59.450 clat percentiles (usec): 00:35:59.450 | 1.00th=[20841], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:59.450 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:59.450 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:59.450 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:35:59.450 | 99.99th=[36963] 00:35:59.450 bw ( KiB/s): min= 2176, max= 2352, per=4.16%, avg=2284.10, stdev=49.22, samples=20 00:35:59.450 iops : min= 544, max= 588, avg=571.00, stdev=12.30, samples=20 00:35:59.450 lat (msec) : 20=0.63%, 50=99.37% 00:35:59.450 cpu : usr=98.60%, sys=1.01%, ctx=5, majf=0, minf=9 00:35:59.450 IO depths : 1=5.4%, 2=11.2%, 4=24.6%, 8=51.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:59.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.450 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.450 filename1: (groupid=0, jobs=1): err= 0: pid=1863049: Mon Jul 15 19:42:08 2024 00:35:59.451 read: IOPS=574, BW=2296KiB/s (2352kB/s)(22.4MiB/10005msec) 00:35:59.451 slat (nsec): min=5673, max=36655, avg=12263.84, stdev=4905.44 00:35:59.451 clat (usec): min=7660, max=30752, avg=27763.83, stdev=1529.19 00:35:59.451 lat (usec): min=7670, max=30773, avg=27776.09, stdev=1529.32 00:35:59.451 clat percentiles (usec): 00:35:59.451 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.451 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:59.451 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:59.451 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30540], 99.95th=[30802], 00:35:59.451 | 99.99th=[30802] 00:35:59.451 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2297.26, stdev=51.80, samples=19 00:35:59.451 iops : min= 544, max= 608, avg=574.32, stdev=12.95, samples=19 00:35:59.451 lat (msec) : 10=0.28%, 20=0.56%, 50=99.16% 
00:35:59.451 cpu : usr=98.73%, sys=0.89%, ctx=13, majf=0, minf=9 00:35:59.451 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:59.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.451 filename1: (groupid=0, jobs=1): err= 0: pid=1863050: Mon Jul 15 19:42:08 2024 00:35:59.451 read: IOPS=574, BW=2297KiB/s (2352kB/s)(22.4MiB/10003msec) 00:35:59.451 slat (nsec): min=4270, max=40378, avg=18887.03, stdev=5193.54 00:35:59.451 clat (usec): min=7668, max=40162, avg=27697.37, stdev=1654.69 00:35:59.451 lat (usec): min=7687, max=40181, avg=27716.25, stdev=1655.27 00:35:59.451 clat percentiles (usec): 00:35:59.451 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.451 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.451 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:59.451 | 99.00th=[29230], 99.50th=[29492], 99.90th=[39584], 99.95th=[39584], 00:35:59.451 | 99.99th=[40109] 00:35:59.451 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2297.26, stdev=52.07, samples=19 00:35:59.451 iops : min= 544, max= 608, avg=574.32, stdev=13.02, samples=19 00:35:59.451 lat (msec) : 10=0.28%, 20=0.66%, 50=99.06% 00:35:59.451 cpu : usr=98.76%, sys=0.83%, ctx=9, majf=0, minf=9 00:35:59.451 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:59.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.451 filename1: (groupid=0, jobs=1): err= 0: pid=1863051: Mon Jul 15 19:42:08 2024 00:35:59.451 read: IOPS=570, BW=2280KiB/s (2335kB/s)(22.3MiB/10021msec) 00:35:59.451 slat (nsec): min=4161, max=51076, avg=22080.93, stdev=8165.70 00:35:59.451 clat (usec): min=18879, max=57688, avg=27905.96, stdev=2136.70 00:35:59.451 lat (usec): min=18898, max=57700, avg=27928.04, stdev=2136.64 00:35:59.451 clat percentiles (usec): 00:35:59.451 | 1.00th=[21103], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.451 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.451 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:59.451 | 99.00th=[34341], 99.50th=[35914], 99.90th=[57410], 99.95th=[57934], 00:35:59.451 | 99.99th=[57934] 00:35:59.451 bw ( KiB/s): min= 2052, max= 2336, per=4.14%, avg=2275.15, stdev=65.08, samples=20 00:35:59.451 iops : min= 513, max= 584, avg=568.75, stdev=16.28, samples=20 00:35:59.451 lat (msec) : 20=0.35%, 50=99.37%, 100=0.28% 00:35:59.451 cpu : usr=98.65%, sys=0.97%, ctx=16, majf=0, minf=9 00:35:59.451 IO depths : 1=2.8%, 2=7.9%, 4=23.1%, 8=56.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:35:59.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.451 filename2: (groupid=0, jobs=1): err= 0: pid=1863052: Mon Jul 15 19:42:08 2024 00:35:59.451 read: 
IOPS=584, BW=2340KiB/s (2396kB/s)(22.9MiB/10008msec) 00:35:59.451 slat (nsec): min=5024, max=61016, avg=15497.68, stdev=5901.17 00:35:59.451 clat (usec): min=11405, max=64563, avg=27230.59, stdev=3721.32 00:35:59.451 lat (usec): min=11436, max=64578, avg=27246.08, stdev=3721.78 00:35:59.451 clat percentiles (usec): 00:35:59.451 | 1.00th=[16450], 5.00th=[20317], 10.00th=[23200], 20.00th=[27395], 00:35:59.451 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.451 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[29492], 00:35:59.451 | 99.00th=[36439], 99.50th=[40109], 99.90th=[64750], 99.95th=[64750], 00:35:59.451 | 99.99th=[64750] 00:35:59.451 bw ( KiB/s): min= 2048, max= 2592, per=4.25%, avg=2334.95, stdev=116.32, samples=20 00:35:59.451 iops : min= 512, max= 648, avg=583.70, stdev=29.09, samples=20 00:35:59.451 lat (msec) : 20=4.68%, 50=95.05%, 100=0.27% 00:35:59.451 cpu : usr=98.47%, sys=1.12%, ctx=8, majf=0, minf=9 00:35:59.451 IO depths : 1=3.8%, 2=8.2%, 4=18.7%, 8=59.9%, 16=9.5%, 32=0.0%, >=64=0.0% 00:35:59.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 complete : 0=0.0%, 4=92.5%, 8=2.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 issued rwts: total=5854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.451 filename2: (groupid=0, jobs=1): err= 0: pid=1863053: Mon Jul 15 19:42:08 2024 00:35:59.451 read: IOPS=571, BW=2287KiB/s (2342kB/s)(22.4MiB/10020msec) 00:35:59.451 slat (nsec): min=4291, max=38177, avg=15280.25, stdev=4381.82 00:35:59.451 clat (usec): min=19157, max=38261, avg=27852.56, stdev=1043.13 00:35:59.451 lat (usec): min=19166, max=38275, avg=27867.84, stdev=1043.15 00:35:59.451 clat percentiles (usec): 00:35:59.451 | 1.00th=[23725], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.451 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.451 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:59.451 | 99.00th=[29492], 99.50th=[32375], 99.90th=[38011], 99.95th=[38011], 00:35:59.451 | 99.99th=[38011] 00:35:59.451 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2284.80, stdev=46.89, samples=20 00:35:59.451 iops : min= 544, max= 576, avg=571.20, stdev=11.72, samples=20 00:35:59.451 lat (msec) : 20=0.38%, 50=99.62% 00:35:59.451 cpu : usr=98.62%, sys=0.99%, ctx=17, majf=0, minf=9 00:35:59.451 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:59.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.451 filename2: (groupid=0, jobs=1): err= 0: pid=1863054: Mon Jul 15 19:42:08 2024 00:35:59.451 read: IOPS=601, BW=2406KiB/s (2463kB/s)(23.5MiB/10006msec) 00:35:59.451 slat (nsec): min=5431, max=52615, avg=13865.27, stdev=7056.12 00:35:59.451 clat (usec): min=12423, max=62434, avg=26521.83, stdev=4528.42 00:35:59.451 lat (usec): min=12438, max=62449, avg=26535.70, stdev=4528.94 00:35:59.451 clat percentiles (usec): 00:35:59.451 | 1.00th=[15664], 5.00th=[18482], 10.00th=[20055], 20.00th=[23462], 00:35:59.451 | 30.00th=[25560], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.451 | 70.00th=[27919], 80.00th=[27919], 90.00th=[30278], 95.00th=[33817], 00:35:59.451 | 
99.00th=[36439], 99.50th=[40109], 99.90th=[62653], 99.95th=[62653], 00:35:59.451 | 99.99th=[62653] 00:35:59.451 bw ( KiB/s): min= 2048, max= 2640, per=4.37%, avg=2403.40, stdev=137.11, samples=20 00:35:59.451 iops : min= 512, max= 660, avg=600.85, stdev=34.28, samples=20 00:35:59.451 lat (msec) : 20=10.27%, 50=89.46%, 100=0.27% 00:35:59.451 cpu : usr=98.71%, sys=0.90%, ctx=17, majf=0, minf=9 00:35:59.451 IO depths : 1=1.2%, 2=2.7%, 4=8.3%, 8=74.2%, 16=13.6%, 32=0.0%, >=64=0.0% 00:35:59.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 complete : 0=0.0%, 4=90.0%, 8=6.7%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 issued rwts: total=6018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.451 filename2: (groupid=0, jobs=1): err= 0: pid=1863055: Mon Jul 15 19:42:08 2024 00:35:59.451 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.3MiB/10014msec) 00:35:59.451 slat (nsec): min=6657, max=49505, avg=17671.76, stdev=6689.77 00:35:59.451 clat (usec): min=18122, max=71187, avg=27936.07, stdev=2596.67 00:35:59.451 lat (usec): min=18140, max=71205, avg=27953.74, stdev=2596.18 00:35:59.451 clat percentiles (usec): 00:35:59.451 | 1.00th=[22152], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.451 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.451 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:59.451 | 99.00th=[29492], 99.50th=[38536], 99.90th=[70779], 99.95th=[70779], 00:35:59.451 | 99.99th=[70779] 00:35:59.451 bw ( KiB/s): min= 2048, max= 2352, per=4.13%, avg=2272.80, stdev=72.02, samples=20 00:35:59.451 iops : min= 512, max= 588, avg=568.20, stdev=18.00, samples=20 00:35:59.451 lat (msec) : 20=0.39%, 50=99.33%, 100=0.28% 00:35:59.451 cpu : usr=98.65%, sys=0.98%, ctx=10, majf=0, minf=9 00:35:59.451 IO depths : 1=5.9%, 2=12.0%, 4=24.6%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:59.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.451 filename2: (groupid=0, jobs=1): err= 0: pid=1863056: Mon Jul 15 19:42:08 2024 00:35:59.451 read: IOPS=570, BW=2280KiB/s (2335kB/s)(22.3MiB/10021msec) 00:35:59.451 slat (nsec): min=5889, max=58245, avg=22791.17, stdev=7229.72 00:35:59.451 clat (usec): min=18853, max=73646, avg=27880.42, stdev=1853.19 00:35:59.451 lat (usec): min=18868, max=73662, avg=27903.21, stdev=1852.41 00:35:59.451 clat percentiles (usec): 00:35:59.451 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.451 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.451 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:59.451 | 99.00th=[29230], 99.50th=[29492], 99.90th=[57410], 99.95th=[73925], 00:35:59.451 | 99.99th=[73925] 00:35:59.451 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2275.20, stdev=66.61, samples=20 00:35:59.451 iops : min= 513, max= 576, avg=568.80, stdev=16.65, samples=20 00:35:59.451 lat (msec) : 20=0.05%, 50=99.67%, 100=0.28% 00:35:59.451 cpu : usr=98.88%, sys=0.74%, ctx=11, majf=0, minf=9 00:35:59.451 IO depths : 1=6.1%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:59.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.451 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.451 filename2: (groupid=0, jobs=1): err= 0: pid=1863057: Mon Jul 15 19:42:08 2024 00:35:59.451 read: IOPS=575, BW=2300KiB/s (2356kB/s)(22.5MiB/10005msec) 00:35:59.451 slat (nsec): min=4626, max=39729, avg=16790.53, stdev=5703.44 00:35:59.451 clat (usec): min=7430, max=42889, avg=27678.33, stdev=2235.10 00:35:59.452 lat (usec): min=7445, max=42901, avg=27695.12, stdev=2235.85 00:35:59.452 clat percentiles (usec): 00:35:59.452 | 1.00th=[15795], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.452 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:59.452 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:59.452 | 99.00th=[32375], 99.50th=[39584], 99.90th=[42730], 99.95th=[42730], 00:35:59.452 | 99.99th=[42730] 00:35:59.452 bw ( KiB/s): min= 2176, max= 2512, per=4.18%, avg=2295.20, stdev=78.29, samples=20 00:35:59.452 iops : min= 544, max= 628, avg=573.80, stdev=19.57, samples=20 00:35:59.452 lat (msec) : 10=0.28%, 20=1.53%, 50=98.19% 00:35:59.452 cpu : usr=98.72%, sys=0.89%, ctx=15, majf=0, minf=9 00:35:59.452 IO depths : 1=4.6%, 2=10.5%, 4=24.3%, 8=52.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:35:59.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.452 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.452 issued rwts: total=5754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.452 filename2: (groupid=0, jobs=1): err= 0: pid=1863058: Mon Jul 15 19:42:08 2024 00:35:59.452 read: IOPS=570, BW=2284KiB/s (2339kB/s)(22.3MiB/10004msec) 00:35:59.452 slat (nsec): min=7113, max=70679, avg=23770.08, stdev=7430.73 00:35:59.452 clat (usec): min=18719, max=57357, avg=27812.84, stdev=1273.31 00:35:59.452 lat (usec): min=18728, max=57387, avg=27836.61, stdev=1273.71 00:35:59.452 clat percentiles (usec): 00:35:59.452 | 1.00th=[25035], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:59.452 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:35:59.452 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:35:59.452 | 99.00th=[29230], 99.50th=[35914], 99.90th=[40633], 99.95th=[40633], 00:35:59.452 | 99.99th=[57410] 00:35:59.452 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2277.05, stdev=53.61, samples=19 00:35:59.452 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:35:59.452 lat (msec) : 20=0.42%, 50=99.54%, 100=0.04% 00:35:59.452 cpu : usr=98.81%, sys=0.80%, ctx=9, majf=0, minf=9 00:35:59.452 IO depths : 1=5.3%, 2=11.3%, 4=24.5%, 8=51.6%, 16=7.2%, 32=0.0%, >=64=0.0% 00:35:59.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.452 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.452 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.452 filename2: (groupid=0, jobs=1): err= 0: pid=1863059: Mon Jul 15 19:42:08 2024 00:35:59.452 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10004msec) 00:35:59.452 slat (nsec): min=9027, max=94860, avg=53829.49, stdev=14647.15 00:35:59.452 clat (usec): min=12853, max=74647, avg=27751.86, stdev=2734.01 00:35:59.452 lat (usec): min=12885, max=74660, 
avg=27805.69, stdev=2732.96 00:35:59.452 clat percentiles (usec): 00:35:59.452 | 1.00th=[19792], 5.00th=[26346], 10.00th=[27132], 20.00th=[27395], 00:35:59.452 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:35:59.452 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28967], 00:35:59.452 | 99.00th=[35390], 99.50th=[39584], 99.90th=[60031], 99.95th=[60556], 00:35:59.452 | 99.99th=[74974] 00:35:59.452 bw ( KiB/s): min= 2100, max= 2400, per=4.14%, avg=2276.42, stdev=60.35, samples=19 00:35:59.452 iops : min= 525, max= 600, avg=569.11, stdev=15.09, samples=19 00:35:59.452 lat (msec) : 20=1.10%, 50=98.62%, 100=0.28% 00:35:59.452 cpu : usr=98.89%, sys=0.68%, ctx=8, majf=0, minf=9 00:35:59.452 IO depths : 1=0.2%, 2=2.9%, 4=11.7%, 8=69.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:35:59.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.452 complete : 0=0.0%, 4=91.6%, 8=5.6%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.452 issued rwts: total=5710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:59.452 00:35:59.452 Run status group 0 (all jobs): 00:35:59.452 READ: bw=53.7MiB/s (56.3MB/s), 2263KiB/s-2406KiB/s (2317kB/s-2463kB/s), io=538MiB (564MB), run=10003-10027msec 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 bdev_null0 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 [2024-07-15 19:42:08.824863] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 bdev_null1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.452 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:59.453 { 00:35:59.453 "params": { 00:35:59.453 "name": "Nvme$subsystem", 00:35:59.453 "trtype": "$TEST_TRANSPORT", 00:35:59.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.453 "adrfam": "ipv4", 00:35:59.453 "trsvcid": "$NVMF_PORT", 00:35:59.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.453 "hdgst": ${hdgst:-false}, 00:35:59.453 "ddgst": ${ddgst:-false} 00:35:59.453 }, 00:35:59.453 "method": "bdev_nvme_attach_controller" 00:35:59.453 } 00:35:59.453 EOF 00:35:59.453 )") 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:59.453 { 00:35:59.453 "params": { 00:35:59.453 "name": "Nvme$subsystem", 00:35:59.453 "trtype": "$TEST_TRANSPORT", 00:35:59.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.453 "adrfam": "ipv4", 00:35:59.453 "trsvcid": "$NVMF_PORT", 00:35:59.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.453 "hdgst": ${hdgst:-false}, 00:35:59.453 "ddgst": ${ddgst:-false} 00:35:59.453 }, 00:35:59.453 "method": "bdev_nvme_attach_controller" 
00:35:59.453 } 00:35:59.453 EOF 00:35:59.453 )") 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:59.453 "params": { 00:35:59.453 "name": "Nvme0", 00:35:59.453 "trtype": "tcp", 00:35:59.453 "traddr": "10.0.0.2", 00:35:59.453 "adrfam": "ipv4", 00:35:59.453 "trsvcid": "4420", 00:35:59.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:59.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:59.453 "hdgst": false, 00:35:59.453 "ddgst": false 00:35:59.453 }, 00:35:59.453 "method": "bdev_nvme_attach_controller" 00:35:59.453 },{ 00:35:59.453 "params": { 00:35:59.453 "name": "Nvme1", 00:35:59.453 "trtype": "tcp", 00:35:59.453 "traddr": "10.0.0.2", 00:35:59.453 "adrfam": "ipv4", 00:35:59.453 "trsvcid": "4420", 00:35:59.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:59.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:59.453 "hdgst": false, 00:35:59.453 "ddgst": false 00:35:59.453 }, 00:35:59.453 "method": "bdev_nvme_attach_controller" 00:35:59.453 }' 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:59.453 19:42:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.453 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:59.453 ... 00:35:59.453 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:59.453 ... 
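For readers following the trace: the heredocs above assemble one bdev_nvme_attach_controller entry per target subsystem and hand the result to fio's spdk_bdev ioengine via --spdk_json_conf, so fio talks NVMe/TCP to the null bdevs exported a few steps earlier. The sketch below is a rough standalone equivalent, not the harness's exact invocation; it assumes a target already listening on 10.0.0.2:4420 with nqn.2016-06.io.spdk:cnode0 (as created above), and /tmp/dif.json, the SPDK path, and the job parameters are illustrative placeholders.

# Hypothetical standalone reproduction of the fio-plugin side of this test.
cat > /tmp/dif.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# The attached controller exposes bdev "Nvme0n1"; SPDK fio plugins need --thread=1.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev fio --name=randread \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/dif.json --thread=1 \
    --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=16 \
    --runtime=10 --time_based=1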
00:35:59.453 fio-3.35 00:35:59.453 Starting 4 threads 00:35:59.453 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.786 00:36:04.786 filename0: (groupid=0, jobs=1): err= 0: pid=1865366: Mon Jul 15 19:42:14 2024 00:36:04.786 read: IOPS=2544, BW=19.9MiB/s (20.8MB/s)(99.4MiB/5003msec) 00:36:04.786 slat (nsec): min=6174, max=27936, avg=8831.21, stdev=2826.91 00:36:04.786 clat (usec): min=1641, max=44088, avg=3118.14, stdev=1122.73 00:36:04.786 lat (usec): min=1647, max=44114, avg=3126.97, stdev=1122.80 00:36:04.786 clat percentiles (usec): 00:36:04.786 | 1.00th=[ 2278], 5.00th=[ 2671], 10.00th=[ 2769], 20.00th=[ 2835], 00:36:04.786 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:36:04.786 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3621], 95.00th=[ 4293], 00:36:04.786 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5473], 99.95th=[43779], 00:36:04.786 | 99.99th=[44303] 00:36:04.786 bw ( KiB/s): min=19200, max=21344, per=24.57%, avg=20358.40, stdev=646.64, samples=10 00:36:04.786 iops : min= 2400, max= 2668, avg=2544.80, stdev=80.83, samples=10 00:36:04.786 lat (msec) : 2=0.27%, 4=91.66%, 10=8.00%, 50=0.06% 00:36:04.786 cpu : usr=96.14%, sys=3.52%, ctx=8, majf=0, minf=18 00:36:04.786 IO depths : 1=0.1%, 2=1.3%, 4=71.9%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.787 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.787 issued rwts: total=12729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:04.787 filename0: (groupid=0, jobs=1): err= 0: pid=1865367: Mon Jul 15 19:42:14 2024 00:36:04.787 read: IOPS=2648, BW=20.7MiB/s (21.7MB/s)(103MiB/5002msec) 00:36:04.787 slat (nsec): min=6185, max=36389, avg=8347.42, stdev=2745.55 00:36:04.787 clat (usec): min=676, max=5312, avg=2995.86, stdev=459.26 00:36:04.787 lat (usec): min=683, max=5324, avg=3004.20, stdev=459.21 00:36:04.787 clat percentiles (usec): 00:36:04.787 | 1.00th=[ 1090], 5.00th=[ 2474], 10.00th=[ 2737], 20.00th=[ 2835], 00:36:04.787 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:36:04.787 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3294], 95.00th=[ 3851], 00:36:04.787 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 4948], 99.95th=[ 5014], 00:36:04.787 | 99.99th=[ 5276] 00:36:04.787 bw ( KiB/s): min=20640, max=22896, per=25.50%, avg=21130.67, stdev=696.78, samples=9 00:36:04.787 iops : min= 2580, max= 2862, avg=2641.33, stdev=87.10, samples=9 00:36:04.787 lat (usec) : 750=0.01% 00:36:04.787 lat (msec) : 2=2.26%, 4=93.31%, 10=4.42% 00:36:04.787 cpu : usr=96.00%, sys=3.64%, ctx=9, majf=0, minf=62 00:36:04.787 IO depths : 1=0.1%, 2=1.8%, 4=71.5%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.787 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.787 issued rwts: total=13247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:04.787 filename1: (groupid=0, jobs=1): err= 0: pid=1865368: Mon Jul 15 19:42:14 2024 00:36:04.787 read: IOPS=2627, BW=20.5MiB/s (21.5MB/s)(103MiB/5042msec) 00:36:04.787 slat (nsec): min=6173, max=33993, avg=8702.16, stdev=2934.30 00:36:04.787 clat (usec): min=790, max=42984, avg=3003.15, stdev=1197.10 00:36:04.787 lat (usec): min=802, max=43005, avg=3011.85, stdev=1197.06 00:36:04.787 clat percentiles (usec): 
00:36:04.787 | 1.00th=[ 2057], 5.00th=[ 2409], 10.00th=[ 2638], 20.00th=[ 2835], 00:36:04.787 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3032], 00:36:04.787 | 70.00th=[ 3032], 80.00th=[ 3032], 90.00th=[ 3195], 95.00th=[ 3589], 00:36:04.787 | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 5211], 99.95th=[42730], 00:36:04.787 | 99.99th=[42730] 00:36:04.787 bw ( KiB/s): min=19872, max=22032, per=25.57%, avg=21190.40, stdev=612.99, samples=10 00:36:04.787 iops : min= 2484, max= 2754, avg=2648.80, stdev=76.62, samples=10 00:36:04.787 lat (usec) : 1000=0.02% 00:36:04.787 lat (msec) : 2=0.72%, 4=95.99%, 10=3.19%, 50=0.08% 00:36:04.787 cpu : usr=95.66%, sys=3.99%, ctx=11, majf=0, minf=32 00:36:04.787 IO depths : 1=0.1%, 2=2.2%, 4=71.4%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.787 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.787 issued rwts: total=13247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:04.787 filename1: (groupid=0, jobs=1): err= 0: pid=1865369: Mon Jul 15 19:42:14 2024 00:36:04.787 read: IOPS=2600, BW=20.3MiB/s (21.3MB/s)(102MiB/5001msec) 00:36:04.787 slat (nsec): min=6190, max=32627, avg=9005.81, stdev=3109.75 00:36:04.787 clat (usec): min=1081, max=45506, avg=3050.45, stdev=1109.60 00:36:04.787 lat (usec): min=1087, max=45530, avg=3059.45, stdev=1109.63 00:36:04.787 clat percentiles (usec): 00:36:04.787 | 1.00th=[ 2212], 5.00th=[ 2638], 10.00th=[ 2737], 20.00th=[ 2835], 00:36:04.787 | 30.00th=[ 2900], 40.00th=[ 2999], 50.00th=[ 2999], 60.00th=[ 3032], 00:36:04.787 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3326], 95.00th=[ 3752], 00:36:04.787 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 5080], 99.95th=[45351], 00:36:04.787 | 99.99th=[45351] 00:36:04.787 bw ( KiB/s): min=18741, max=21472, per=25.13%, avg=20827.22, stdev=824.98, samples=9 00:36:04.787 iops : min= 2342, max= 2684, avg=2603.33, stdev=103.32, samples=9 00:36:04.787 lat (msec) : 2=0.42%, 4=96.19%, 10=3.33%, 50=0.06% 00:36:04.787 cpu : usr=95.54%, sys=4.12%, ctx=13, majf=0, minf=56 00:36:04.787 IO depths : 1=0.1%, 2=1.0%, 4=71.5%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.787 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.787 issued rwts: total=13006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:04.787 00:36:04.787 Run status group 0 (all jobs): 00:36:04.787 READ: bw=80.9MiB/s (84.9MB/s), 19.9MiB/s-20.7MiB/s (20.8MB/s-21.7MB/s), io=408MiB (428MB), run=5001-5042msec 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.787 00:36:04.787 real 0m24.180s 00:36:04.787 user 4m51.377s 00:36:04.787 sys 0m4.592s 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:04.787 19:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.787 ************************************ 00:36:04.787 END TEST fio_dif_rand_params 00:36:04.787 ************************************ 00:36:04.787 19:42:15 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:04.787 19:42:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:04.787 19:42:15 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:04.787 19:42:15 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:04.787 19:42:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:04.787 ************************************ 00:36:04.787 START TEST fio_dif_digest 00:36:04.787 ************************************ 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:04.787 bdev_null0 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.787 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:04.788 [2024-07-15 19:42:15.282196] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:36:04.788 { 00:36:04.788 "params": { 00:36:04.788 "name": "Nvme$subsystem", 00:36:04.788 "trtype": "$TEST_TRANSPORT", 00:36:04.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.788 "adrfam": "ipv4", 00:36:04.788 "trsvcid": "$NVMF_PORT", 00:36:04.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.788 "hdgst": ${hdgst:-false}, 00:36:04.788 "ddgst": ${ddgst:-false} 00:36:04.788 }, 00:36:04.788 "method": "bdev_nvme_attach_controller" 00:36:04.788 } 00:36:04.788 EOF 00:36:04.788 )") 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:04.788 "params": { 00:36:04.788 "name": "Nvme0", 00:36:04.788 "trtype": "tcp", 00:36:04.788 "traddr": "10.0.0.2", 00:36:04.788 "adrfam": "ipv4", 00:36:04.788 "trsvcid": "4420", 00:36:04.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:04.788 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:04.788 "hdgst": true, 00:36:04.788 "ddgst": true 00:36:04.788 }, 00:36:04.788 "method": "bdev_nvme_attach_controller" 00:36:04.788 }' 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:04.788 19:42:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.788 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:04.788 ... 
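This digest pass differs from the random-parameter runs above in two visible ways: the null bdev behind cnode0 is created with 16 bytes of metadata and --dif-type 3 rather than 1, and the generated attach-controller params set "hdgst": true and "ddgst": true, enabling the NVMe/TCP header and data digests (CRC32C per PDU) on the connection fio drives. A minimal target-side sketch under the same assumptions (rpc.py from the SPDK tree; the transport-creation step normally happens once, earlier in the target's life):

# Target side: null bdev with metadata and DIF type 3, exported over NVMe/TCP.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
# Initiator side: same JSON as in the earlier sketch, but with
#   "hdgst": true, "ddgst": true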
00:36:04.788 fio-3.35 00:36:04.788 Starting 3 threads 00:36:05.046 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.250 00:36:17.250 filename0: (groupid=0, jobs=1): err= 0: pid=1866582: Mon Jul 15 19:42:26 2024 00:36:17.250 read: IOPS=304, BW=38.1MiB/s (39.9MB/s)(381MiB/10007msec) 00:36:17.250 slat (nsec): min=3020, max=48239, avg=11208.32, stdev=2219.63 00:36:17.250 clat (usec): min=5849, max=55847, avg=9838.44, stdev=1934.30 00:36:17.250 lat (usec): min=5861, max=55859, avg=9849.64, stdev=1934.39 00:36:17.250 clat percentiles (usec): 00:36:17.250 | 1.00th=[ 6718], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8586], 00:36:17.250 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10290], 00:36:17.250 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:36:17.250 | 99.00th=[12387], 99.50th=[12780], 99.90th=[17695], 99.95th=[55313], 00:36:17.250 | 99.99th=[55837] 00:36:17.250 bw ( KiB/s): min=34048, max=42240, per=37.76%, avg=38963.20, stdev=2080.75, samples=20 00:36:17.250 iops : min= 266, max= 330, avg=304.40, stdev=16.26, samples=20 00:36:17.250 lat (msec) : 10=45.98%, 20=53.92%, 100=0.10% 00:36:17.250 cpu : usr=93.96%, sys=5.60%, ctx=19, majf=0, minf=243 00:36:17.250 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.250 issued rwts: total=3047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.250 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:17.250 filename0: (groupid=0, jobs=1): err= 0: pid=1866583: Mon Jul 15 19:42:26 2024 00:36:17.250 read: IOPS=211, BW=26.5MiB/s (27.7MB/s)(266MiB/10046msec) 00:36:17.250 slat (nsec): min=4264, max=19132, avg=11281.72, stdev=2008.50 00:36:17.250 clat (usec): min=5779, max=96041, avg=14140.88, stdev=10157.63 00:36:17.250 lat (usec): min=5792, max=96053, avg=14152.16, stdev=10157.61 00:36:17.250 clat percentiles (usec): 00:36:17.250 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:36:17.250 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:36:17.250 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13435], 95.00th=[51643], 00:36:17.250 | 99.00th=[53740], 99.50th=[54264], 99.90th=[93848], 99.95th=[94897], 00:36:17.250 | 99.99th=[95945] 00:36:17.250 bw ( KiB/s): min=18432, max=33536, per=26.34%, avg=27177.00, stdev=4616.56, samples=20 00:36:17.250 iops : min= 144, max= 262, avg=212.30, stdev=36.07, samples=20 00:36:17.250 lat (msec) : 10=3.81%, 20=90.17%, 50=0.05%, 100=5.97% 00:36:17.250 cpu : usr=95.24%, sys=4.34%, ctx=22, majf=0, minf=101 00:36:17.250 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.250 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.250 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:17.250 filename0: (groupid=0, jobs=1): err= 0: pid=1866584: Mon Jul 15 19:42:26 2024 00:36:17.250 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(366MiB/10046msec) 00:36:17.250 slat (nsec): min=6502, max=25606, avg=11249.57, stdev=2214.49 00:36:17.250 clat (usec): min=6207, max=48678, avg=10274.38, stdev=1817.63 00:36:17.250 lat (usec): min=6214, max=48690, avg=10285.63, stdev=1817.80 00:36:17.250 clat percentiles (usec): 00:36:17.250 | 
1.00th=[ 6915], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 8586], 00:36:17.250 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[10945], 00:36:17.250 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:36:17.250 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14877], 99.95th=[45876], 00:36:17.250 | 99.99th=[48497] 00:36:17.250 bw ( KiB/s): min=34304, max=41472, per=36.26%, avg=37414.40, stdev=2021.34, samples=20 00:36:17.250 iops : min= 268, max= 324, avg=292.30, stdev=15.79, samples=20 00:36:17.250 lat (msec) : 10=33.03%, 20=66.91%, 50=0.07% 00:36:17.250 cpu : usr=94.49%, sys=5.19%, ctx=18, majf=0, minf=91 00:36:17.250 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.250 issued rwts: total=2925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.250 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:17.250 00:36:17.250 Run status group 0 (all jobs): 00:36:17.250 READ: bw=101MiB/s (106MB/s), 26.5MiB/s-38.1MiB/s (27.7MB/s-39.9MB/s), io=1012MiB (1061MB), run=10007-10046msec 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.250 00:36:17.250 real 0m11.083s 00:36:17.250 user 0m35.201s 00:36:17.250 sys 0m1.837s 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:17.250 19:42:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.250 ************************************ 00:36:17.250 END TEST fio_dif_digest 00:36:17.250 ************************************ 00:36:17.250 19:42:26 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:17.250 19:42:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:17.250 19:42:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:17.250 19:42:26 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:17.250 19:42:26 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:17.250 19:42:26 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:17.250 19:42:26 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:17.250 19:42:26 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:17.250 19:42:26 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:36:17.250 rmmod nvme_tcp 00:36:17.250 rmmod nvme_fabrics 00:36:17.250 rmmod nvme_keyring 00:36:17.250 19:42:26 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:17.251 19:42:26 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:17.251 19:42:26 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:17.251 19:42:26 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1857688 ']' 00:36:17.251 19:42:26 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1857688 00:36:17.251 19:42:26 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1857688 ']' 00:36:17.251 19:42:26 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1857688 00:36:17.251 19:42:26 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:17.251 19:42:26 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:17.251 19:42:26 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1857688 00:36:17.251 19:42:26 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:17.251 19:42:26 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:17.251 19:42:26 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1857688' 00:36:17.251 killing process with pid 1857688 00:36:17.251 19:42:26 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1857688 00:36:17.251 19:42:26 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1857688 00:36:17.251 19:42:26 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:17.251 19:42:26 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:18.186 Waiting for block devices as requested 00:36:18.186 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:18.443 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:18.443 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:18.443 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:18.443 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:18.701 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:18.701 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:18.701 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:18.701 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:18.959 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:18.959 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:18.959 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:18.959 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:19.218 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:19.218 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:19.218 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:19.218 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:19.477 19:42:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:19.477 19:42:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:19.477 19:42:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:19.477 19:42:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:19.477 19:42:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.477 19:42:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:19.477 19:42:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.383 19:42:32 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:21.383 00:36:21.383 real 1m12.156s 00:36:21.383 user 7m8.396s 00:36:21.383 sys 0m18.656s 00:36:21.383 19:42:32 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:36:21.383 19:42:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:21.383 ************************************ 00:36:21.383 END TEST nvmf_dif 00:36:21.383 ************************************ 00:36:21.383 19:42:32 -- common/autotest_common.sh@1142 -- # return 0 00:36:21.383 19:42:32 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:21.383 19:42:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:21.383 19:42:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:21.383 19:42:32 -- common/autotest_common.sh@10 -- # set +x 00:36:21.383 ************************************ 00:36:21.383 START TEST nvmf_abort_qd_sizes 00:36:21.383 ************************************ 00:36:21.383 19:42:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:21.642 * Looking for test storage... 00:36:21.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.642 19:42:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.643 19:42:32 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:21.643 19:42:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:26.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:26.915 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:26.915 Found net devices under 0000:86:00.0: cvl_0_0 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:26.915 Found net devices under 0000:86:00.1: cvl_0_1 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
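Annotation: the gather_supported_nvmf_pci_devs pass traced above whitelists NIC PCI IDs (the run matched the two Intel E810 functions at 0000:86:00.0/1, device 0x159b) and then globs sysfs to learn which kernel netdev each function is bound to (cvl_0_0 / cvl_0_1 from the ice driver). A minimal standalone sketch of that pattern, assuming the illustrative helper name below and a hard-coded E810 ID list; the real common.sh instead walks a pre-built pci_bus_cache:

    list_e810_netdevs() {
        # Sketch only: print "PCI-address: netdev" for each bound E810 function.
        local pci dev
        for pci in /sys/bus/pci/devices/*; do
            [[ $(< "$pci/vendor") == 0x8086 ]] || continue
            [[ $(< "$pci/device") == 0x159b || $(< "$pci/device") == 0x1592 ]] || continue
            for dev in "$pci"/net/*; do
                [[ -e $dev ]] || continue          # function has no netdev (driver unbound)
                printf '%s: %s\n' "${pci##*/}" "${dev##*/}"   # e.g. "0000:86:00.0: cvl_0_0"
            done
        done
    }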
00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:26.915 19:42:36 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:26.915 19:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:26.915 19:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:26.915 19:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:26.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:26.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:36:26.915 00:36:26.916 --- 10.0.0.2 ping statistics --- 00:36:26.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.916 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:36:26.916 19:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:26.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:26.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:36:26.916 00:36:26.916 --- 10.0.0.1 ping statistics --- 00:36:26.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.916 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:36:26.916 19:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:26.916 19:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:26.916 19:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:26.916 19:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:28.818 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:28.818 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:28.818 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:28.818 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:28.818 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:28.818 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:28.818 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:28.818 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:28.818 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:29.077 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:29.077 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:29.077 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:29.077 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:29.077 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:29.077 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:29.077 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:30.015 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1874143 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1874143 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1874143 ']' 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:30.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:30.015 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.015 [2024-07-15 19:42:40.765932] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:36:30.015 [2024-07-15 19:42:40.765971] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.015 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.015 [2024-07-15 19:42:40.797230] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:30.015 [2024-07-15 19:42:40.827050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:30.274 [2024-07-15 19:42:40.870719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:30.274 [2024-07-15 19:42:40.870757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.274 [2024-07-15 19:42:40.870765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.274 [2024-07-15 19:42:40.870772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.274 [2024-07-15 19:42:40.870778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:30.274 [2024-07-15 19:42:40.870825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.274 [2024-07-15 19:42:40.870845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.274 [2024-07-15 19:42:40.870868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.274 [2024-07-15 19:42:40.870869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.274 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:30.274 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:30.274 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:30.274 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:30.274 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:36:30.274 19:42:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:30.275 19:42:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:30.275 19:42:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:30.275 19:42:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.275 ************************************ 00:36:30.275 START TEST spdk_target_abort 00:36:30.275 ************************************ 00:36:30.275 19:42:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:30.275 19:42:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:30.275 19:42:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:36:30.275 19:42:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.275 19:42:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.561 spdk_targetn1 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.561 [2024-07-15 19:42:43.884171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.561 [2024-07-15 19:42:43.913392] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:33.561 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:33.561 EAL: No free 2048 kB hugepages reported on node 1 00:36:36.841 Initializing NVMe Controllers 00:36:36.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:36.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:36.841 Initialization complete. Launching workers. 00:36:36.841 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14639, failed: 0 00:36:36.842 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1502, failed to submit 13137 00:36:36.842 success 757, unsuccess 745, failed 0 00:36:36.842 19:42:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:36.842 19:42:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:36.842 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.163 Initializing NVMe Controllers 00:36:40.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:40.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:40.163 Initialization complete. Launching workers. 00:36:40.163 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8676, failed: 0 00:36:40.163 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1252, failed to submit 7424 00:36:40.163 success 314, unsuccess 938, failed 0 00:36:40.163 19:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.163 19:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.163 EAL: No free 2048 kB hugepages reported on node 1 00:36:42.698 Initializing NVMe Controllers 00:36:42.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:42.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:42.698 Initialization complete. Launching workers. 
00:36:42.698 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37989, failed: 0 00:36:42.698 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2710, failed to submit 35279 00:36:42.698 success 593, unsuccess 2117, failed 0 00:36:42.698 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:42.698 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.698 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:42.698 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:42.698 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:42.698 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.698 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1874143 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1874143 ']' 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1874143 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1874143 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1874143' 00:36:44.073 killing process with pid 1874143 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1874143 00:36:44.073 19:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1874143 00:36:44.333 00:36:44.333 real 0m13.981s 00:36:44.333 user 0m53.491s 00:36:44.333 sys 0m2.245s 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.333 ************************************ 00:36:44.333 END TEST spdk_target_abort 00:36:44.333 ************************************ 00:36:44.333 19:42:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:44.333 19:42:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:44.333 19:42:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:44.333 19:42:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:44.333 19:42:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:44.333 
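For readability, the spdk_target_abort flow traced above collapses into the following direct calls; this is a condensed sketch assuming the default /var/tmp/spdk.sock RPC socket that rpc_cmd talks to, not a drop-in replacement for the test script:

    ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target   # exposes spdk_targetn1
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    for qd in 4 24 64; do   # the three queue depths exercised above
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
    ./scripts/rpc.py bdev_nvme_detach_controller spdk_target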
************************************ 00:36:44.333 START TEST kernel_target_abort 00:36:44.333 ************************************ 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:44.333 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:46.869 Waiting for block devices as requested 00:36:46.869 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:46.869 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:47.128 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:47.128 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:47.128 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:47.128 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:47.388 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:47.388 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:47.388 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:47.388 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:47.647 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:47.647 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:47.647 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:47.907 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:47.908 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:47.908 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:47.908 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:48.166 No valid GPT data, bailing 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:48.166 19:42:58 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:48.166 00:36:48.166 Discovery Log Number of Records 2, Generation counter 2 00:36:48.166 =====Discovery Log Entry 0====== 00:36:48.166 trtype: tcp 00:36:48.166 adrfam: ipv4 00:36:48.166 subtype: current discovery subsystem 00:36:48.166 treq: not specified, sq flow control disable supported 00:36:48.166 portid: 1 00:36:48.166 trsvcid: 4420 00:36:48.166 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:48.166 traddr: 10.0.0.1 00:36:48.166 eflags: none 00:36:48.166 sectype: none 00:36:48.166 =====Discovery Log Entry 1====== 00:36:48.166 trtype: tcp 00:36:48.166 adrfam: ipv4 00:36:48.166 subtype: nvme subsystem 00:36:48.166 treq: not specified, sq flow control disable supported 00:36:48.166 portid: 1 00:36:48.166 trsvcid: 4420 00:36:48.166 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:48.166 traddr: 10.0.0.1 00:36:48.166 eflags: none 00:36:48.166 sectype: none 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.166 19:42:58 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:48.166 19:42:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.166 EAL: No free 2048 kB hugepages reported on node 1 00:36:51.456 Initializing NVMe Controllers 00:36:51.456 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:51.456 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:51.456 Initialization complete. Launching workers. 00:36:51.456 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79145, failed: 0 00:36:51.456 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 79145, failed to submit 0 00:36:51.456 success 0, unsuccess 79145, failed 0 00:36:51.456 19:43:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:51.456 19:43:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:51.456 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.744 Initializing NVMe Controllers 00:36:54.744 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:54.744 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:54.744 Initialization complete. Launching workers. 
00:36:54.744 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 132190, failed: 0 00:36:54.744 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33238, failed to submit 98952 00:36:54.744 success 0, unsuccess 33238, failed 0 00:36:54.744 19:43:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:54.744 19:43:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:54.744 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.031 Initializing NVMe Controllers 00:36:58.031 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:58.031 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:58.031 Initialization complete. Launching workers. 00:36:58.031 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 126701, failed: 0 00:36:58.031 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31682, failed to submit 95019 00:36:58.031 success 0, unsuccess 31682, failed 0 00:36:58.032 19:43:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:58.032 19:43:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:58.032 19:43:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:58.032 19:43:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:58.032 19:43:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:58.032 19:43:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:58.032 19:43:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:58.032 19:43:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:58.032 19:43:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:58.032 19:43:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:59.935 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:59.935 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:59.935 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:59.935 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:59.935 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:59.935 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:59.935 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:59.935 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:59.935 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:59.935 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:59.935 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:00.194 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:00.194 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:00.194 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:37:00.194 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:00.194 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:01.133 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:01.133 00:37:01.133 real 0m16.651s 00:37:01.133 user 0m7.906s 00:37:01.133 sys 0m4.855s 00:37:01.133 19:43:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:01.133 19:43:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.133 ************************************ 00:37:01.133 END TEST kernel_target_abort 00:37:01.133 ************************************ 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:01.133 rmmod nvme_tcp 00:37:01.133 rmmod nvme_fabrics 00:37:01.133 rmmod nvme_keyring 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1874143 ']' 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1874143 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1874143 ']' 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1874143 00:37:01.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1874143) - No such process 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1874143 is not found' 00:37:01.133 Process with pid 1874143 is not found 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:01.133 19:43:11 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:03.693 Waiting for block devices as requested 00:37:03.693 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:03.693 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:03.693 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:03.952 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:03.952 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:03.952 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:03.952 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:04.211 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:04.211 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:04.211 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:04.211 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:04.470 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:04.470 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:04.470 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:37:04.470 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:04.729 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:04.729 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:04.729 19:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:04.729 19:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:04.729 19:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:04.729 19:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:04.729 19:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.729 19:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:04.729 19:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.263 19:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:07.263 00:37:07.263 real 0m45.374s 00:37:07.263 user 1m4.964s 00:37:07.263 sys 0m14.537s 00:37:07.263 19:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:07.263 19:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:07.263 ************************************ 00:37:07.263 END TEST nvmf_abort_qd_sizes 00:37:07.263 ************************************ 00:37:07.263 19:43:17 -- common/autotest_common.sh@1142 -- # return 0 00:37:07.263 19:43:17 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:07.263 19:43:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:07.263 19:43:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:07.263 19:43:17 -- common/autotest_common.sh@10 -- # set +x 00:37:07.263 ************************************ 00:37:07.263 START TEST keyring_file 00:37:07.263 ************************************ 00:37:07.263 19:43:17 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:07.263 * Looking for test storage... 
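The teardown traced just above (nvmftestfini, after clean_kernel_target already unwound the configfs entries) boils down to unloading the fabrics modules, checking for a stale target pid, resetting driver bindings, and flushing the test address. A rough sketch; the function name is illustrative, and the final ip netns delete is an assumed equivalent of _remove_spdk_ns, whose commands run with tracing disabled and so are not visible in the log:

    nvmf_tcp_teardown() {
        modprobe -v -r nvme-tcp         # also drops nvme_fabrics / nvme_keyring, as logged above
        modprobe -v -r nvme-fabrics
        kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"   # target had already exited in this run
        ./scripts/setup.sh reset        # rebind ioatdma/nvme devices for the next suite
        ip -4 addr flush cvl_0_1        # drop the 10.0.0.1/24 initiator address
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true # assumed equivalent of _remove_spdk_ns
    }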
00:37:07.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:07.263 19:43:17 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.263 19:43:17 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.263 19:43:17 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.263 19:43:17 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.263 19:43:17 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.263 19:43:17 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.263 19:43:17 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:07.263 19:43:17 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MKwfBQ8IKK 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:07.263 19:43:17 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MKwfBQ8IKK 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MKwfBQ8IKK 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.MKwfBQ8IKK 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uV4ZSH4LKo 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:07.263 19:43:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uV4ZSH4LKo 00:37:07.263 19:43:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uV4ZSH4LKo 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uV4ZSH4LKo 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@30 -- # tgtpid=1882636 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:07.263 19:43:17 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1882636 00:37:07.263 19:43:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1882636 ']' 00:37:07.263 19:43:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.263 19:43:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:07.263 19:43:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:07.263 19:43:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:07.263 19:43:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:07.263 [2024-07-15 19:43:17.906712] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:37:07.263 [2024-07-15 19:43:17.906765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882636 ] 00:37:07.263 EAL: No free 2048 kB hugepages reported on node 1 00:37:07.263 [2024-07-15 19:43:17.933266] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
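The prep_key calls above reduce to a small pattern: format a raw hex secret as an NVMe/TCP interchange PSK, write it to a mktemp path, and lock the file down to 0600 so it can be handed to the keyring RPCs. A minimal sketch under the assumption that format_interchange_psk (the inline-python helper traced above) is available in the shell; it is not a stand-alone re-implementation of that helper:

    key0=00112233445566778899aabbccddeeff           # same test key as above
    key0path=$(mktemp)                               # this run got /tmp/tmp.MKwfBQ8IKK
    format_interchange_psk "$key0" 0 > "$key0path"   # digest 0 -> NVMeTLSkey-1 formatted PSK
    chmod 0600 "$key0path"                           # required before the key path is used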
00:37:07.263 [2024-07-15 19:43:17.961527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.263 [2024-07-15 19:43:18.001210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.521 19:43:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:07.521 19:43:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:07.521 19:43:18 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:07.521 19:43:18 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.521 19:43:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:07.521 [2024-07-15 19:43:18.187110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.521 null0 00:37:07.522 [2024-07-15 19:43:18.219162] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:07.522 [2024-07-15 19:43:18.219497] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:07.522 [2024-07-15 19:43:18.227173] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.522 19:43:18 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:07.522 [2024-07-15 19:43:18.239203] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:07.522 request: 00:37:07.522 { 00:37:07.522 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:07.522 "secure_channel": false, 00:37:07.522 "listen_address": { 00:37:07.522 "trtype": "tcp", 00:37:07.522 "traddr": "127.0.0.1", 00:37:07.522 "trsvcid": "4420" 00:37:07.522 }, 00:37:07.522 "method": "nvmf_subsystem_add_listener", 00:37:07.522 "req_id": 1 00:37:07.522 } 00:37:07.522 Got JSON-RPC error response 00:37:07.522 response: 00:37:07.522 { 00:37:07.522 "code": -32602, 00:37:07.522 "message": "Invalid parameters" 00:37:07.522 } 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:07.522 19:43:18 keyring_file -- keyring/file.sh@46 -- # bperfpid=1882674 00:37:07.522 19:43:18 
keyring_file -- keyring/file.sh@48 -- # waitforlisten 1882674 /var/tmp/bperf.sock 00:37:07.522 19:43:18 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1882674 ']' 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:07.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:07.522 19:43:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:07.522 [2024-07-15 19:43:18.292320] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:37:07.522 [2024-07-15 19:43:18.292361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882674 ] 00:37:07.522 EAL: No free 2048 kB hugepages reported on node 1 00:37:07.522 [2024-07-15 19:43:18.318591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:07.522 [2024-07-15 19:43:18.347327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.781 [2024-07-15 19:43:18.388189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.781 19:43:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:07.781 19:43:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:07.781 19:43:18 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MKwfBQ8IKK 00:37:07.781 19:43:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MKwfBQ8IKK 00:37:08.039 19:43:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uV4ZSH4LKo 00:37:08.039 19:43:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uV4ZSH4LKo 00:37:08.039 19:43:18 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:08.039 19:43:18 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:08.039 19:43:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.039 19:43:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.039 19:43:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:08.298 19:43:18 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.MKwfBQ8IKK == \/\t\m\p\/\t\m\p\.\M\K\w\f\B\Q\8\I\K\K ]] 00:37:08.298 19:43:18 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:08.298 19:43:18 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:08.298 19:43:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:37:08.298 19:43:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:08.298 19:43:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.557 19:43:19 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.uV4ZSH4LKo == \/\t\m\p\/\t\m\p\.\u\V\4\Z\S\H\4\L\K\o ]] 00:37:08.557 19:43:19 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:08.557 19:43:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:08.557 19:43:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:08.557 19:43:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.557 19:43:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.557 19:43:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:08.557 19:43:19 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:08.557 19:43:19 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:08.557 19:43:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:08.557 19:43:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:08.557 19:43:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.557 19:43:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.557 19:43:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:08.816 19:43:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:08.816 19:43:19 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:08.816 19:43:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:09.073 [2024-07-15 19:43:19.690722] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:09.073 nvme0n1 00:37:09.073 19:43:19 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:09.073 19:43:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:09.073 19:43:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:09.073 19:43:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.073 19:43:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:09.074 19:43:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.332 19:43:19 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:09.332 19:43:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:09.332 19:43:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:09.332 19:43:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:09.332 19:43:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.332 19:43:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:37:09.332 19:43:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:09.332 19:43:20 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:09.332 19:43:20 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:09.590 Running I/O for 1 seconds... 00:37:10.548 00:37:10.548 Latency(us) 00:37:10.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:10.548 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:10.548 nvme0n1 : 1.01 13671.57 53.40 0.00 0.00 9335.36 5242.88 21655.37 00:37:10.548 =================================================================================================================== 00:37:10.548 Total : 13671.57 53.40 0.00 0.00 9335.36 5242.88 21655.37 00:37:10.548 0 00:37:10.548 19:43:21 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:10.548 19:43:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:10.807 19:43:21 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:10.807 19:43:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.807 19:43:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:10.807 19:43:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.807 19:43:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.807 19:43:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.807 19:43:21 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:10.807 19:43:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:10.807 19:43:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:10.807 19:43:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.807 19:43:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.807 19:43:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:10.807 19:43:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.067 19:43:21 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:11.067 19:43:21 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:11.067 19:43:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:11.067 19:43:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:11.067 19:43:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:11.067 19:43:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.067 19:43:21 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:11.067 19:43:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.067 19:43:21 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:11.067 19:43:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:11.326 [2024-07-15 19:43:21.972344] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:11.327 [2024-07-15 19:43:21.973314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0bc10 (107): Transport endpoint is not connected 00:37:11.327 [2024-07-15 19:43:21.974310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0bc10 (9): Bad file descriptor 00:37:11.327 [2024-07-15 19:43:21.975311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.327 [2024-07-15 19:43:21.975322] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:11.327 [2024-07-15 19:43:21.975328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.327 request: 00:37:11.327 { 00:37:11.327 "name": "nvme0", 00:37:11.327 "trtype": "tcp", 00:37:11.327 "traddr": "127.0.0.1", 00:37:11.327 "adrfam": "ipv4", 00:37:11.327 "trsvcid": "4420", 00:37:11.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:11.327 "prchk_reftag": false, 00:37:11.327 "prchk_guard": false, 00:37:11.327 "hdgst": false, 00:37:11.327 "ddgst": false, 00:37:11.327 "psk": "key1", 00:37:11.327 "method": "bdev_nvme_attach_controller", 00:37:11.327 "req_id": 1 00:37:11.327 } 00:37:11.327 Got JSON-RPC error response 00:37:11.327 response: 00:37:11.327 { 00:37:11.327 "code": -5, 00:37:11.327 "message": "Input/output error" 00:37:11.327 } 00:37:11.327 19:43:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:11.327 19:43:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:11.327 19:43:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:11.327 19:43:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:11.327 19:43:21 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:11.327 19:43:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:11.327 19:43:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:11.327 19:43:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.327 19:43:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:11.327 19:43:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.327 19:43:22 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:11.327 19:43:22 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:11.327 19:43:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:11.327 19:43:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:11.327 19:43:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.327 19:43:22 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:11.327 19:43:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.585 19:43:22 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:11.585 19:43:22 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:11.585 19:43:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:11.843 19:43:22 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:11.843 19:43:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:12.102 19:43:22 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:12.102 19:43:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.102 19:43:22 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:12.102 19:43:22 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:12.102 19:43:22 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.MKwfBQ8IKK 00:37:12.102 19:43:22 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.MKwfBQ8IKK 00:37:12.102 19:43:22 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:12.102 19:43:22 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.MKwfBQ8IKK 00:37:12.102 19:43:22 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:12.102 19:43:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:12.102 19:43:22 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:12.102 19:43:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:12.102 19:43:22 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MKwfBQ8IKK 00:37:12.102 19:43:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MKwfBQ8IKK 00:37:12.360 [2024-07-15 19:43:23.034428] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.MKwfBQ8IKK': 0100660 00:37:12.360 [2024-07-15 19:43:23.034452] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:12.360 request: 00:37:12.360 { 00:37:12.360 "name": "key0", 00:37:12.360 "path": "/tmp/tmp.MKwfBQ8IKK", 00:37:12.360 "method": "keyring_file_add_key", 00:37:12.360 "req_id": 1 00:37:12.360 } 00:37:12.360 Got JSON-RPC error response 00:37:12.360 response: 00:37:12.360 { 00:37:12.360 "code": -1, 00:37:12.360 "message": "Operation not permitted" 00:37:12.360 } 00:37:12.360 19:43:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:12.360 19:43:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:12.360 19:43:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:12.360 19:43:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:12.360 19:43:23 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.MKwfBQ8IKK 00:37:12.360 19:43:23 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MKwfBQ8IKK 00:37:12.360 19:43:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MKwfBQ8IKK 00:37:12.623 19:43:23 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.MKwfBQ8IKK 00:37:12.623 19:43:23 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:12.623 19:43:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.623 19:43:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:12.623 19:43:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.623 19:43:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.623 19:43:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.623 19:43:23 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:12.623 19:43:23 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.623 19:43:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:12.623 19:43:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.623 19:43:23 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:12.623 19:43:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:12.623 19:43:23 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:12.623 19:43:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:12.623 19:43:23 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.623 19:43:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.881 [2024-07-15 19:43:23.563852] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.MKwfBQ8IKK': No such file or directory 00:37:12.881 [2024-07-15 19:43:23.563876] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:12.881 [2024-07-15 19:43:23.563896] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:12.881 [2024-07-15 19:43:23.563902] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:12.881 [2024-07-15 19:43:23.563907] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:12.881 request: 00:37:12.881 { 00:37:12.881 "name": "nvme0", 00:37:12.881 "trtype": "tcp", 00:37:12.881 "traddr": "127.0.0.1", 00:37:12.881 "adrfam": "ipv4", 00:37:12.881 "trsvcid": "4420", 00:37:12.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:12.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:12.881 "prchk_reftag": false, 00:37:12.881 
"prchk_guard": false, 00:37:12.881 "hdgst": false, 00:37:12.881 "ddgst": false, 00:37:12.881 "psk": "key0", 00:37:12.881 "method": "bdev_nvme_attach_controller", 00:37:12.881 "req_id": 1 00:37:12.881 } 00:37:12.881 Got JSON-RPC error response 00:37:12.881 response: 00:37:12.881 { 00:37:12.881 "code": -19, 00:37:12.881 "message": "No such device" 00:37:12.881 } 00:37:12.881 19:43:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:12.881 19:43:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:12.881 19:43:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:12.881 19:43:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:12.881 19:43:23 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:12.881 19:43:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:13.140 19:43:23 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fHn7nLEm3c 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:13.140 19:43:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:13.140 19:43:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:13.140 19:43:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:13.140 19:43:23 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:13.140 19:43:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:13.140 19:43:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fHn7nLEm3c 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fHn7nLEm3c 00:37:13.140 19:43:23 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.fHn7nLEm3c 00:37:13.140 19:43:23 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fHn7nLEm3c 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fHn7nLEm3c 00:37:13.140 19:43:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.140 19:43:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.399 nvme0n1 00:37:13.399 19:43:24 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:13.399 19:43:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.399 19:43:24 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.399 19:43:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.399 19:43:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.399 19:43:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.658 19:43:24 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:13.658 19:43:24 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:13.658 19:43:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:13.917 19:43:24 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:13.917 19:43:24 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:13.917 19:43:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.917 19:43:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.917 19:43:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.917 19:43:24 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:13.917 19:43:24 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:13.917 19:43:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.917 19:43:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.917 19:43:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.917 19:43:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.917 19:43:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.176 19:43:24 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:14.176 19:43:24 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:14.177 19:43:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:14.436 19:43:25 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:14.436 19:43:25 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:14.436 19:43:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.436 19:43:25 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:14.436 19:43:25 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fHn7nLEm3c 00:37:14.436 19:43:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fHn7nLEm3c 00:37:14.694 19:43:25 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uV4ZSH4LKo 00:37:14.695 19:43:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uV4ZSH4LKo 00:37:14.954 19:43:25 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:37:14.954 19:43:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:14.954 nvme0n1 00:37:15.214 19:43:25 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:15.214 19:43:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:15.214 19:43:26 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:15.214 "subsystems": [ 00:37:15.214 { 00:37:15.214 "subsystem": "keyring", 00:37:15.214 "config": [ 00:37:15.214 { 00:37:15.214 "method": "keyring_file_add_key", 00:37:15.214 "params": { 00:37:15.214 "name": "key0", 00:37:15.214 "path": "/tmp/tmp.fHn7nLEm3c" 00:37:15.214 } 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "method": "keyring_file_add_key", 00:37:15.214 "params": { 00:37:15.214 "name": "key1", 00:37:15.214 "path": "/tmp/tmp.uV4ZSH4LKo" 00:37:15.214 } 00:37:15.214 } 00:37:15.214 ] 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "subsystem": "iobuf", 00:37:15.214 "config": [ 00:37:15.214 { 00:37:15.214 "method": "iobuf_set_options", 00:37:15.214 "params": { 00:37:15.214 "small_pool_count": 8192, 00:37:15.214 "large_pool_count": 1024, 00:37:15.214 "small_bufsize": 8192, 00:37:15.214 "large_bufsize": 135168 00:37:15.214 } 00:37:15.214 } 00:37:15.214 ] 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "subsystem": "sock", 00:37:15.214 "config": [ 00:37:15.214 { 00:37:15.214 "method": "sock_set_default_impl", 00:37:15.214 "params": { 00:37:15.214 "impl_name": "posix" 00:37:15.214 } 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "method": "sock_impl_set_options", 00:37:15.214 "params": { 00:37:15.214 "impl_name": "ssl", 00:37:15.214 "recv_buf_size": 4096, 00:37:15.214 "send_buf_size": 4096, 00:37:15.214 "enable_recv_pipe": true, 00:37:15.214 "enable_quickack": false, 00:37:15.214 "enable_placement_id": 0, 00:37:15.214 "enable_zerocopy_send_server": true, 00:37:15.214 "enable_zerocopy_send_client": false, 00:37:15.214 "zerocopy_threshold": 0, 00:37:15.214 "tls_version": 0, 00:37:15.214 "enable_ktls": false 00:37:15.214 } 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "method": "sock_impl_set_options", 00:37:15.214 "params": { 00:37:15.214 "impl_name": "posix", 00:37:15.214 "recv_buf_size": 2097152, 00:37:15.214 "send_buf_size": 2097152, 00:37:15.214 "enable_recv_pipe": true, 00:37:15.214 "enable_quickack": false, 00:37:15.214 "enable_placement_id": 0, 00:37:15.214 "enable_zerocopy_send_server": true, 00:37:15.214 "enable_zerocopy_send_client": false, 00:37:15.214 "zerocopy_threshold": 0, 00:37:15.214 "tls_version": 0, 00:37:15.214 "enable_ktls": false 00:37:15.214 } 00:37:15.214 } 00:37:15.214 ] 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "subsystem": "vmd", 00:37:15.214 "config": [] 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "subsystem": "accel", 00:37:15.214 "config": [ 00:37:15.214 { 00:37:15.214 "method": "accel_set_options", 00:37:15.214 "params": { 00:37:15.214 "small_cache_size": 128, 00:37:15.214 "large_cache_size": 16, 00:37:15.214 "task_count": 2048, 00:37:15.214 "sequence_count": 2048, 00:37:15.214 "buf_count": 2048 00:37:15.214 } 00:37:15.214 } 00:37:15.214 ] 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "subsystem": "bdev", 00:37:15.214 "config": [ 00:37:15.214 { 00:37:15.214 "method": "bdev_set_options", 00:37:15.214 
"params": { 00:37:15.214 "bdev_io_pool_size": 65535, 00:37:15.214 "bdev_io_cache_size": 256, 00:37:15.214 "bdev_auto_examine": true, 00:37:15.214 "iobuf_small_cache_size": 128, 00:37:15.214 "iobuf_large_cache_size": 16 00:37:15.214 } 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "method": "bdev_raid_set_options", 00:37:15.214 "params": { 00:37:15.214 "process_window_size_kb": 1024 00:37:15.214 } 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "method": "bdev_iscsi_set_options", 00:37:15.214 "params": { 00:37:15.214 "timeout_sec": 30 00:37:15.214 } 00:37:15.214 }, 00:37:15.214 { 00:37:15.214 "method": "bdev_nvme_set_options", 00:37:15.214 "params": { 00:37:15.214 "action_on_timeout": "none", 00:37:15.214 "timeout_us": 0, 00:37:15.214 "timeout_admin_us": 0, 00:37:15.214 "keep_alive_timeout_ms": 10000, 00:37:15.214 "arbitration_burst": 0, 00:37:15.214 "low_priority_weight": 0, 00:37:15.214 "medium_priority_weight": 0, 00:37:15.214 "high_priority_weight": 0, 00:37:15.214 "nvme_adminq_poll_period_us": 10000, 00:37:15.214 "nvme_ioq_poll_period_us": 0, 00:37:15.215 "io_queue_requests": 512, 00:37:15.215 "delay_cmd_submit": true, 00:37:15.215 "transport_retry_count": 4, 00:37:15.215 "bdev_retry_count": 3, 00:37:15.215 "transport_ack_timeout": 0, 00:37:15.215 "ctrlr_loss_timeout_sec": 0, 00:37:15.215 "reconnect_delay_sec": 0, 00:37:15.215 "fast_io_fail_timeout_sec": 0, 00:37:15.215 "disable_auto_failback": false, 00:37:15.215 "generate_uuids": false, 00:37:15.215 "transport_tos": 0, 00:37:15.215 "nvme_error_stat": false, 00:37:15.215 "rdma_srq_size": 0, 00:37:15.215 "io_path_stat": false, 00:37:15.215 "allow_accel_sequence": false, 00:37:15.215 "rdma_max_cq_size": 0, 00:37:15.215 "rdma_cm_event_timeout_ms": 0, 00:37:15.215 "dhchap_digests": [ 00:37:15.215 "sha256", 00:37:15.215 "sha384", 00:37:15.215 "sha512" 00:37:15.215 ], 00:37:15.215 "dhchap_dhgroups": [ 00:37:15.215 "null", 00:37:15.215 "ffdhe2048", 00:37:15.215 "ffdhe3072", 00:37:15.215 "ffdhe4096", 00:37:15.215 "ffdhe6144", 00:37:15.215 "ffdhe8192" 00:37:15.215 ] 00:37:15.215 } 00:37:15.215 }, 00:37:15.215 { 00:37:15.215 "method": "bdev_nvme_attach_controller", 00:37:15.215 "params": { 00:37:15.215 "name": "nvme0", 00:37:15.215 "trtype": "TCP", 00:37:15.215 "adrfam": "IPv4", 00:37:15.215 "traddr": "127.0.0.1", 00:37:15.215 "trsvcid": "4420", 00:37:15.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.215 "prchk_reftag": false, 00:37:15.215 "prchk_guard": false, 00:37:15.215 "ctrlr_loss_timeout_sec": 0, 00:37:15.215 "reconnect_delay_sec": 0, 00:37:15.215 "fast_io_fail_timeout_sec": 0, 00:37:15.215 "psk": "key0", 00:37:15.215 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.215 "hdgst": false, 00:37:15.215 "ddgst": false 00:37:15.215 } 00:37:15.215 }, 00:37:15.215 { 00:37:15.215 "method": "bdev_nvme_set_hotplug", 00:37:15.215 "params": { 00:37:15.215 "period_us": 100000, 00:37:15.215 "enable": false 00:37:15.215 } 00:37:15.215 }, 00:37:15.215 { 00:37:15.215 "method": "bdev_wait_for_examine" 00:37:15.215 } 00:37:15.215 ] 00:37:15.215 }, 00:37:15.215 { 00:37:15.215 "subsystem": "nbd", 00:37:15.215 "config": [] 00:37:15.215 } 00:37:15.215 ] 00:37:15.215 }' 00:37:15.215 19:43:26 keyring_file -- keyring/file.sh@114 -- # killprocess 1882674 00:37:15.215 19:43:26 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1882674 ']' 00:37:15.215 19:43:26 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1882674 00:37:15.215 19:43:26 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:15.215 19:43:26 keyring_file -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:15.215 19:43:26 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1882674 00:37:15.474 19:43:26 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:15.474 19:43:26 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:15.474 19:43:26 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1882674' 00:37:15.474 killing process with pid 1882674 00:37:15.474 19:43:26 keyring_file -- common/autotest_common.sh@967 -- # kill 1882674 00:37:15.474 Received shutdown signal, test time was about 1.000000 seconds 00:37:15.474 00:37:15.474 Latency(us) 00:37:15.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.474 =================================================================================================================== 00:37:15.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:15.474 19:43:26 keyring_file -- common/autotest_common.sh@972 -- # wait 1882674 00:37:15.475 19:43:26 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:15.475 19:43:26 keyring_file -- keyring/file.sh@117 -- # bperfpid=1883971 00:37:15.475 19:43:26 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:15.475 "subsystems": [ 00:37:15.475 { 00:37:15.475 "subsystem": "keyring", 00:37:15.475 "config": [ 00:37:15.475 { 00:37:15.475 "method": "keyring_file_add_key", 00:37:15.475 "params": { 00:37:15.475 "name": "key0", 00:37:15.475 "path": "/tmp/tmp.fHn7nLEm3c" 00:37:15.475 } 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "method": "keyring_file_add_key", 00:37:15.475 "params": { 00:37:15.475 "name": "key1", 00:37:15.475 "path": "/tmp/tmp.uV4ZSH4LKo" 00:37:15.475 } 00:37:15.475 } 00:37:15.475 ] 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "subsystem": "iobuf", 00:37:15.475 "config": [ 00:37:15.475 { 00:37:15.475 "method": "iobuf_set_options", 00:37:15.475 "params": { 00:37:15.475 "small_pool_count": 8192, 00:37:15.475 "large_pool_count": 1024, 00:37:15.475 "small_bufsize": 8192, 00:37:15.475 "large_bufsize": 135168 00:37:15.475 } 00:37:15.475 } 00:37:15.475 ] 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "subsystem": "sock", 00:37:15.475 "config": [ 00:37:15.475 { 00:37:15.475 "method": "sock_set_default_impl", 00:37:15.475 "params": { 00:37:15.475 "impl_name": "posix" 00:37:15.475 } 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "method": "sock_impl_set_options", 00:37:15.475 "params": { 00:37:15.475 "impl_name": "ssl", 00:37:15.475 "recv_buf_size": 4096, 00:37:15.475 "send_buf_size": 4096, 00:37:15.475 "enable_recv_pipe": true, 00:37:15.475 "enable_quickack": false, 00:37:15.475 "enable_placement_id": 0, 00:37:15.475 "enable_zerocopy_send_server": true, 00:37:15.475 "enable_zerocopy_send_client": false, 00:37:15.475 "zerocopy_threshold": 0, 00:37:15.475 "tls_version": 0, 00:37:15.475 "enable_ktls": false 00:37:15.475 } 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "method": "sock_impl_set_options", 00:37:15.475 "params": { 00:37:15.475 "impl_name": "posix", 00:37:15.475 "recv_buf_size": 2097152, 00:37:15.475 "send_buf_size": 2097152, 00:37:15.475 "enable_recv_pipe": true, 00:37:15.475 "enable_quickack": false, 00:37:15.475 "enable_placement_id": 0, 00:37:15.475 "enable_zerocopy_send_server": true, 00:37:15.475 "enable_zerocopy_send_client": false, 00:37:15.475 "zerocopy_threshold": 0, 
00:37:15.475 "tls_version": 0, 00:37:15.475 "enable_ktls": false 00:37:15.475 } 00:37:15.475 } 00:37:15.475 ] 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "subsystem": "vmd", 00:37:15.475 "config": [] 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "subsystem": "accel", 00:37:15.475 "config": [ 00:37:15.475 { 00:37:15.475 "method": "accel_set_options", 00:37:15.475 "params": { 00:37:15.475 "small_cache_size": 128, 00:37:15.475 "large_cache_size": 16, 00:37:15.475 "task_count": 2048, 00:37:15.475 "sequence_count": 2048, 00:37:15.475 "buf_count": 2048 00:37:15.475 } 00:37:15.475 } 00:37:15.475 ] 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "subsystem": "bdev", 00:37:15.475 "config": [ 00:37:15.475 { 00:37:15.475 "method": "bdev_set_options", 00:37:15.475 "params": { 00:37:15.475 "bdev_io_pool_size": 65535, 00:37:15.475 "bdev_io_cache_size": 256, 00:37:15.475 "bdev_auto_examine": true, 00:37:15.475 "iobuf_small_cache_size": 128, 00:37:15.475 "iobuf_large_cache_size": 16 00:37:15.475 } 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "method": "bdev_raid_set_options", 00:37:15.475 "params": { 00:37:15.475 "process_window_size_kb": 1024 00:37:15.475 } 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "method": "bdev_iscsi_set_options", 00:37:15.475 "params": { 00:37:15.475 "timeout_sec": 30 00:37:15.475 } 00:37:15.475 }, 00:37:15.475 { 00:37:15.475 "method": "bdev_nvme_set_options", 00:37:15.475 "params": { 00:37:15.475 "action_on_timeout": "none", 00:37:15.475 "timeout_us": 0, 00:37:15.475 "timeout_admin_us": 0, 00:37:15.475 "keep_alive_timeout_ms": 10000, 00:37:15.475 "arbitration_burst": 0, 00:37:15.475 "low_priority_weight": 0, 00:37:15.475 "medium_priority_weight": 0, 00:37:15.475 "high_priority_weight": 0, 00:37:15.475 "nvme_adminq_poll_period_us": 10000, 00:37:15.475 "nvme_ioq_poll_period_us": 0, 00:37:15.475 "io_queue_requests": 512, 00:37:15.475 "delay_cmd_submit": true, 00:37:15.475 "transport_retry_count": 4, 00:37:15.475 "bdev_retry_count": 3, 00:37:15.475 "transport_ack_timeout": 0, 00:37:15.475 "ctrlr_loss_timeout_sec": 0, 00:37:15.475 "reconnect_delay_sec": 0, 00:37:15.475 "fast_io_fail_timeout_sec": 0, 00:37:15.475 "disable_auto_failback": false, 00:37:15.475 "generate_uuids": false, 00:37:15.475 "transport_tos": 0, 00:37:15.475 "nvme_error_stat": false, 00:37:15.475 "rdma_srq_size": 0, 00:37:15.475 "io_path_stat": false, 00:37:15.475 "allow_accel_sequence": false, 00:37:15.475 19:43:26 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1883971 /var/tmp/bperf.sock 00:37:15.475 "rdma_max_cq_size": 0, 00:37:15.475 "rdma_cm_event_timeout_ms": 0, 00:37:15.475 "dhchap_digests": [ 00:37:15.475 "sha256", 00:37:15.475 "sha384", 00:37:15.475 "sha512" 00:37:15.475 ], 00:37:15.475 "dhchap_dhgroups": [ 00:37:15.475 "null", 00:37:15.475 "ffdhe2048", 00:37:15.475 "ffdhe3072", 00:37:15.475 "ffdhe4096", 00:37:15.475 "ffdhe6144", 00:37:15.476 "ffdhe8192" 00:37:15.476 ] 00:37:15.476 } 00:37:15.476 }, 00:37:15.476 { 00:37:15.476 "method": "bdev_nvme_attach_controller", 00:37:15.476 "params": { 00:37:15.476 "name": "nvme0", 00:37:15.476 "trtype": "TCP", 00:37:15.476 "adrfam": "IPv4", 00:37:15.476 "traddr": "127.0.0.1", 00:37:15.476 "trsvcid": "4420", 00:37:15.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.476 "prchk_reftag": false, 00:37:15.476 "prchk_guard": false, 00:37:15.476 "ctrlr_loss_timeout_sec": 0, 00:37:15.476 "reconnect_delay_sec": 0, 00:37:15.476 "fast_io_fail_timeout_sec": 0, 00:37:15.476 "psk": "key0", 00:37:15.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.476 "hdgst": false, 
00:37:15.476 "ddgst": false 00:37:15.476 } 00:37:15.476 }, 00:37:15.476 { 00:37:15.476 "method": "bdev_nvme_set_hotplug", 00:37:15.476 "params": { 00:37:15.476 "period_us": 100000, 00:37:15.476 "enable": false 00:37:15.476 } 00:37:15.476 }, 00:37:15.476 { 00:37:15.476 "method": "bdev_wait_for_examine" 00:37:15.476 } 00:37:15.476 ] 00:37:15.476 }, 00:37:15.476 { 00:37:15.476 "subsystem": "nbd", 00:37:15.476 "config": [] 00:37:15.476 } 00:37:15.476 ] 00:37:15.476 }' 00:37:15.476 19:43:26 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1883971 ']' 00:37:15.476 19:43:26 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:15.476 19:43:26 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:15.476 19:43:26 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:15.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:15.476 19:43:26 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:15.476 19:43:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:15.476 [2024-07-15 19:43:26.300201] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:37:15.476 [2024-07-15 19:43:26.300256] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883971 ] 00:37:15.476 EAL: No free 2048 kB hugepages reported on node 1 00:37:15.476 [2024-07-15 19:43:26.326691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:37:15.735 [2024-07-15 19:43:26.351448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.735 [2024-07-15 19:43:26.393035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:15.735 [2024-07-15 19:43:26.547493] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:16.302 19:43:27 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:16.302 19:43:27 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:16.302 19:43:27 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:16.302 19:43:27 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:16.302 19:43:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.560 19:43:27 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:16.560 19:43:27 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:16.560 19:43:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:16.560 19:43:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.560 19:43:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.560 19:43:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.560 19:43:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:16.819 19:43:27 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:16.819 19:43:27 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:16.819 19:43:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.819 19:43:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:16.819 19:43:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.819 19:43:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:16.819 19:43:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.819 19:43:27 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:16.819 19:43:27 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:16.819 19:43:27 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:16.819 19:43:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:17.077 19:43:27 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:17.077 19:43:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:17.077 19:43:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.fHn7nLEm3c /tmp/tmp.uV4ZSH4LKo 00:37:17.077 19:43:27 keyring_file -- keyring/file.sh@20 -- # killprocess 1883971 00:37:17.077 19:43:27 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1883971 ']' 00:37:17.077 19:43:27 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1883971 00:37:17.077 19:43:27 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:17.077 19:43:27 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:17.077 19:43:27 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1883971 00:37:17.077 19:43:27 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:17.077 19:43:27 
keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:17.077 19:43:27 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1883971' 00:37:17.077 killing process with pid 1883971 00:37:17.077 19:43:27 keyring_file -- common/autotest_common.sh@967 -- # kill 1883971 00:37:17.077 Received shutdown signal, test time was about 1.000000 seconds 00:37:17.077 00:37:17.077 Latency(us) 00:37:17.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.077 =================================================================================================================== 00:37:17.077 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:17.077 19:43:27 keyring_file -- common/autotest_common.sh@972 -- # wait 1883971 00:37:17.335 19:43:28 keyring_file -- keyring/file.sh@21 -- # killprocess 1882636 00:37:17.335 19:43:28 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1882636 ']' 00:37:17.335 19:43:28 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1882636 00:37:17.335 19:43:28 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:17.335 19:43:28 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:17.335 19:43:28 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1882636 00:37:17.335 19:43:28 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:17.335 19:43:28 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:17.335 19:43:28 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1882636' 00:37:17.335 killing process with pid 1882636 00:37:17.335 19:43:28 keyring_file -- common/autotest_common.sh@967 -- # kill 1882636 00:37:17.335 [2024-07-15 19:43:28.098724] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:17.335 19:43:28 keyring_file -- common/autotest_common.sh@972 -- # wait 1882636 00:37:17.625 00:37:17.625 real 0m10.749s 00:37:17.625 user 0m26.151s 00:37:17.625 sys 0m2.655s 00:37:17.625 19:43:28 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:17.625 19:43:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:17.625 ************************************ 00:37:17.625 END TEST keyring_file 00:37:17.625 ************************************ 00:37:17.625 19:43:28 -- common/autotest_common.sh@1142 -- # return 0 00:37:17.625 19:43:28 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:17.625 19:43:28 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:17.625 19:43:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:17.625 19:43:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:17.625 19:43:28 -- common/autotest_common.sh@10 -- # set +x 00:37:17.625 ************************************ 00:37:17.625 START TEST keyring_linux 00:37:17.625 ************************************ 00:37:17.625 19:43:28 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:17.885 * Looking for test storage... 
00:37:17.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.885 19:43:28 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:17.885 19:43:28 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.885 19:43:28 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.885 19:43:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.885 19:43:28 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.885 19:43:28 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.885 19:43:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:17.885 19:43:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:17.885 19:43:28 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:17.885 /tmp/:spdk-test:key0 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:17.885 19:43:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:17.885 19:43:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:17.885 /tmp/:spdk-test:key1 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1884512 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:17.885 19:43:28 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1884512 00:37:17.885 19:43:28 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1884512 ']' 00:37:17.885 19:43:28 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.885 19:43:28 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:17.885 19:43:28 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.885 19:43:28 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:17.885 19:43:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:17.885 [2024-07-15 19:43:28.697808] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:37:17.885 [2024-07-15 19:43:28.697858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884512 ] 00:37:17.885 EAL: No free 2048 kB hugepages reported on node 1 00:37:17.885 [2024-07-15 19:43:28.723641] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
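For reference, the key preparation that prep_key performs above reduces to writing the PSK in the NVMe/TCP TLS interchange format and locking down the file. A rough sketch using this run's key0 value (the exact helper internals live in test/keyring/common.sh and test/nvmf/common.sh; the format description is per the NVMe/TCP PSK interchange convention, prefix : hash indicator : base64 of key bytes plus CRC-32):

    # 00 = no hash; payload is base64(configured key bytes + CRC-32), colon-terminated
    echo 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0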
00:37:18.144 [2024-07-15 19:43:28.751055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.144 [2024-07-15 19:43:28.791910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.144 19:43:28 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:18.144 19:43:28 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:18.144 19:43:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:18.144 19:43:28 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.144 19:43:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:18.144 [2024-07-15 19:43:28.975545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:18.144 null0 00:37:18.402 [2024-07-15 19:43:29.007602] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:18.402 [2024-07-15 19:43:29.007929] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:18.402 19:43:29 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.402 19:43:29 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:18.402 903353677 00:37:18.402 19:43:29 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:18.402 759566248 00:37:18.402 19:43:29 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1884520 00:37:18.402 19:43:29 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1884520 /var/tmp/bperf.sock 00:37:18.402 19:43:29 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:18.402 19:43:29 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1884520 ']' 00:37:18.402 19:43:29 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:18.402 19:43:29 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:18.402 19:43:29 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:18.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:18.402 19:43:29 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:18.402 19:43:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:18.402 [2024-07-15 19:43:29.077739] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.07.0-rc2 initialization... 00:37:18.402 [2024-07-15 19:43:29.077780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884520 ] 00:37:18.402 EAL: No free 2048 kB hugepages reported on node 1 00:37:18.402 [2024-07-15 19:43:29.103121] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
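The keyctl calls above load those interchange-format PSKs into the kernel session keyring (@s). Condensed, the lifecycle the test exercises looks roughly like this (serial numbers such as 903353677 vary per run; whether the value is passed from the file or a variable is a detail of linux.sh):

    sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)   # prints the new key's serial
    keyctl search @s user :spdk-test:key0                                    # resolves the same serial
    keyctl print "$sn"                                                       # dumps the PSK payload back
    keyctl unlink "$sn"                                                      # done by cleanup() at the end of the test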
00:37:18.402 [2024-07-15 19:43:29.131267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.402 [2024-07-15 19:43:29.170417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.402 19:43:29 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:18.402 19:43:29 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:18.402 19:43:29 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:18.402 19:43:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:18.660 19:43:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:18.660 19:43:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:18.919 19:43:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:18.919 19:43:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:18.919 [2024-07-15 19:43:29.744255] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:19.177 nvme0n1 00:37:19.177 19:43:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:19.177 19:43:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:19.177 19:43:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:19.177 19:43:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:19.177 19:43:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:19.177 19:43:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.177 19:43:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:19.177 19:43:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:19.177 19:43:30 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:19.177 19:43:30 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:19.177 19:43:30 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.177 19:43:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.177 19:43:30 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:19.436 19:43:30 keyring_linux -- keyring/linux.sh@25 -- # sn=903353677 00:37:19.436 19:43:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:19.436 19:43:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:19.436 19:43:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 903353677 == \9\0\3\3\5\3\6\7\7 ]] 00:37:19.436 19:43:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 903353677 00:37:19.436 19:43:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:19.436 19:43:30 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:19.436 Running I/O for 1 seconds... 00:37:20.816 00:37:20.816 Latency(us) 00:37:20.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.816 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:20.816 nvme0n1 : 1.01 14128.67 55.19 0.00 0.00 9023.04 6639.08 16868.40 00:37:20.816 =================================================================================================================== 00:37:20.816 Total : 14128.67 55.19 0.00 0.00 9023.04 6639.08 16868.40 00:37:20.816 0 00:37:20.816 19:43:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:20.816 19:43:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:20.816 19:43:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:20.816 19:43:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:20.816 19:43:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:20.816 19:43:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:20.816 19:43:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:20.816 19:43:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:21.078 19:43:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:21.078 [2024-07-15 19:43:31.829554] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:21.078 [2024-07-15 19:43:31.830286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e2bf0 (107): Transport endpoint is not connected 00:37:21.078 [2024-07-15 19:43:31.831282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e2bf0 (9): Bad file descriptor 00:37:21.078 [2024-07-15 19:43:31.832282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:21.078 [2024-07-15 19:43:31.832292] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:21.078 [2024-07-15 19:43:31.832299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:21.078 request: 00:37:21.078 { 00:37:21.078 "name": "nvme0", 00:37:21.078 "trtype": "tcp", 00:37:21.078 "traddr": "127.0.0.1", 00:37:21.078 "adrfam": "ipv4", 00:37:21.078 "trsvcid": "4420", 00:37:21.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.078 "prchk_reftag": false, 00:37:21.078 "prchk_guard": false, 00:37:21.078 "hdgst": false, 00:37:21.078 "ddgst": false, 00:37:21.078 "psk": ":spdk-test:key1", 00:37:21.078 "method": "bdev_nvme_attach_controller", 00:37:21.078 "req_id": 1 00:37:21.078 } 00:37:21.078 Got JSON-RPC error response 00:37:21.078 response: 00:37:21.078 { 00:37:21.078 "code": -5, 00:37:21.078 "message": "Input/output error" 00:37:21.078 } 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@33 -- # sn=903353677 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 903353677 00:37:21.078 1 links removed 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@33 -- # sn=759566248 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 759566248 00:37:21.078 1 links removed 00:37:21.078 19:43:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1884520 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1884520 ']' 00:37:21.078 19:43:31 keyring_linux 
-- common/autotest_common.sh@952 -- # kill -0 1884520 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1884520 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1884520' 00:37:21.078 killing process with pid 1884520 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@967 -- # kill 1884520 00:37:21.078 Received shutdown signal, test time was about 1.000000 seconds 00:37:21.078 00:37:21.078 Latency(us) 00:37:21.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.078 =================================================================================================================== 00:37:21.078 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:21.078 19:43:31 keyring_linux -- common/autotest_common.sh@972 -- # wait 1884520 00:37:21.374 19:43:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1884512 00:37:21.374 19:43:32 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1884512 ']' 00:37:21.374 19:43:32 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1884512 00:37:21.374 19:43:32 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:21.374 19:43:32 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:21.374 19:43:32 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1884512 00:37:21.374 19:43:32 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:21.374 19:43:32 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:21.374 19:43:32 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1884512' 00:37:21.374 killing process with pid 1884512 00:37:21.374 19:43:32 keyring_linux -- common/autotest_common.sh@967 -- # kill 1884512 00:37:21.374 19:43:32 keyring_linux -- common/autotest_common.sh@972 -- # wait 1884512 00:37:21.636 00:37:21.636 real 0m3.969s 00:37:21.636 user 0m6.993s 00:37:21.636 sys 0m1.369s 00:37:21.636 19:43:32 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:21.636 19:43:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:21.636 ************************************ 00:37:21.636 END TEST keyring_linux 00:37:21.636 ************************************ 00:37:21.636 19:43:32 -- common/autotest_common.sh@1142 -- # return 0 00:37:21.636 19:43:32 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:21.636 19:43:32 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:21.636 19:43:32 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:21.636 19:43:32 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:21.636 19:43:32 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:21.636 19:43:32 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:21.636 19:43:32 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:21.636 19:43:32 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:21.636 19:43:32 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:21.636 19:43:32 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:21.636 19:43:32 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:21.636 
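Condensed, the RPC sequence the keyring_linux test drove against the bdevperf app is shown below; rpc.py and bdevperf.py are SPDK's own tools, and the socket path, NQNs and key names are the values from this run:

    rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc keyring_linux_set_options --enable        # allow keys to be resolved from the kernel keyring
    $rpc framework_start_init
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    $rpc keyring_get_keys | jq length              # one key expected while the controller is attached
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $rpc bdev_nvme_detach_controller nvme0
    # Re-attaching with --psk :spdk-test:key1 (not configured on the target) is the negative case
    # and fails with the -5 Input/output error JSON-RPC response recorded above.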
19:43:32 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:21.636 19:43:32 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:21.636 19:43:32 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:21.636 19:43:32 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:21.636 19:43:32 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:21.636 19:43:32 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:21.636 19:43:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:21.636 19:43:32 -- common/autotest_common.sh@10 -- # set +x 00:37:21.636 19:43:32 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:21.636 19:43:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:21.636 19:43:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:21.636 19:43:32 -- common/autotest_common.sh@10 -- # set +x 00:37:26.916 INFO: APP EXITING 00:37:26.916 INFO: killing all VMs 00:37:26.916 INFO: killing vhost app 00:37:26.916 INFO: EXIT DONE 00:37:28.296 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:37:28.296 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:37:28.296 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:37:28.555 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:37:28.555 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:37:30.461 Cleaning 00:37:30.461 Removing: /var/run/dpdk/spdk0/config 00:37:30.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:30.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:30.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:30.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:30.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:30.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:30.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:30.721 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:30.721 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:30.721 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:30.721 Removing: /var/run/dpdk/spdk1/config 00:37:30.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:30.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:30.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:30.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:30.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:30.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:30.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:30.721 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:30.721 Removing: 
/var/run/dpdk/spdk1/fbarray_memzone 00:37:30.721 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:30.721 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:30.721 Removing: /var/run/dpdk/spdk2/config 00:37:30.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:30.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:30.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:30.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:30.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:30.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:30.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:30.721 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:30.721 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:30.721 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:30.721 Removing: /var/run/dpdk/spdk3/config 00:37:30.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:30.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:30.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:30.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:30.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:30.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:30.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:30.721 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:30.721 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:30.721 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:30.721 Removing: /var/run/dpdk/spdk4/config 00:37:30.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:30.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:30.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:30.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:30.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:30.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:30.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:30.721 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:30.721 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:30.721 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:30.721 Removing: /dev/shm/bdev_svc_trace.1 00:37:30.721 Removing: /dev/shm/nvmf_trace.0 00:37:30.721 Removing: /dev/shm/spdk_tgt_trace.pid1424775 00:37:30.721 Removing: /var/run/dpdk/spdk0 00:37:30.721 Removing: /var/run/dpdk/spdk1 00:37:30.721 Removing: /var/run/dpdk/spdk2 00:37:30.721 Removing: /var/run/dpdk/spdk3 00:37:30.721 Removing: /var/run/dpdk/spdk4 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1422643 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1423604 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1424775 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1425187 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1426128 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1426364 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1427335 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1427351 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1427686 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1429191 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1430457 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1430738 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1431019 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1431128 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1431379 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1431631 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1431877 
00:37:30.721 Removing: /var/run/dpdk/spdk_pid1432161 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1432901 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1435674 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1435920 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1436174 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1436183 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1436673 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1436678 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1437173 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1437180 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1437536 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1437664 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1437827 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1437925 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1438283 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1438589 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1438933 00:37:30.721 Removing: /var/run/dpdk/spdk_pid1439192 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1439217 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1439437 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1439671 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1439916 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1440309 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1440746 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1441014 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1441250 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1441483 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1441737 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1441969 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1442200 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1442441 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1442682 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1442927 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1443176 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1443421 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1443668 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1443925 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1444177 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1444424 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1444677 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1444955 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1445083 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1448679 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1528200 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1532321 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1542291 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1547455 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1551233 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1551922 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1557912 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1563477 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1563518 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1564391 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1565302 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1566339 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1566946 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1566985 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1567475 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1567664 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1567666 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1568579 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1569476 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1570242 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1570881 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1570883 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1571117 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1572120 
00:37:30.981 Removing: /var/run/dpdk/spdk_pid1573100 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1581183 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1581432 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1585579 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1591084 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1593670 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1603587 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1612155 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1614365 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1615282 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1631637 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1635317 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1660332 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1664591 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1666193 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1667950 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1668037 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1668084 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1668282 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1668578 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1670389 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1671142 00:37:30.981 Removing: /var/run/dpdk/spdk_pid1671416 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1673513 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1674000 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1674613 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1678759 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1683908 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1688899 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1724738 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1728548 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1735096 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1736181 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1737714 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1741779 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1745560 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1752898 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1752900 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1757364 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1757502 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1757741 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1758073 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1758116 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1759475 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1761198 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1762887 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1764484 00:37:30.982 Removing: /var/run/dpdk/spdk_pid1766086 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1767690 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1773539 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1774146 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1776369 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1777202 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1782889 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1785423 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1790601 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1795874 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1804013 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1810946 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1810966 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1829079 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1829746 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1830232 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1830708 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1831443 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1831922 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1832396 
00:37:31.241 Removing: /var/run/dpdk/spdk_pid1833041 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1837081 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1837326 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1843161 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1843220 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1845455 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1852766 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1852897 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1857735 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1859701 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1861667 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1862715 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1864967 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1866277 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1874757 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1875213 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1875675 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1877931 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1878397 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1878861 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1882636 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1882674 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1883971 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1884512 00:37:31.241 Removing: /var/run/dpdk/spdk_pid1884520 00:37:31.241 Clean 00:37:31.241 19:43:42 -- common/autotest_common.sh@1451 -- # return 0 00:37:31.241 19:43:42 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:31.241 19:43:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:31.241 19:43:42 -- common/autotest_common.sh@10 -- # set +x 00:37:31.241 19:43:42 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:31.241 19:43:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:31.241 19:43:42 -- common/autotest_common.sh@10 -- # set +x 00:37:31.241 19:43:42 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:31.241 19:43:42 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:31.241 19:43:42 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:31.241 19:43:42 -- spdk/autotest.sh@391 -- # hash lcov 00:37:31.241 19:43:42 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:31.241 19:43:42 -- spdk/autotest.sh@393 -- # hostname 00:37:31.242 19:43:42 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:31.501 geninfo: WARNING: invalid characters removed from testname! 
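The coverage pass here (capture above, merge and filtering just below) condenses to a capture-merge-filter sequence with lcov; in this sketch the repeated --rc option list is trimmed to the essentials and $SPDK_DIR stands in for the checkout path:

    opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
    lcov $opts -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info        # capture after the test run
    lcov $opts -a cov_base.info -a cov_test.info -o cov_total.info        # merge with the pre-test baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $opts -r cov_total.info "$pat" -o cov_total.info             # strip external and uninteresting code
    done
    rm -f cov_base.info cov_test.info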
00:37:53.440 19:44:02 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:54.008 19:44:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:55.912 19:44:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:57.817 19:44:08 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:59.722 19:44:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:01.628 19:44:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:03.069 19:44:13 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:03.329 19:44:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:03.329 19:44:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:03.329 19:44:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.329 19:44:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.329 19:44:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.329 19:44:13 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.329 19:44:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.329 19:44:13 -- paths/export.sh@5 -- $ export PATH 00:38:03.329 19:44:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.329 19:44:13 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:03.329 19:44:13 -- common/autobuild_common.sh@444 -- $ date +%s 00:38:03.329 19:44:13 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721065453.XXXXXX 00:38:03.329 19:44:13 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721065453.V4FpRz 00:38:03.329 19:44:13 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:38:03.329 19:44:13 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:38:03.329 19:44:13 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:03.329 19:44:13 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:03.329 19:44:13 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:03.329 19:44:13 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:03.329 19:44:13 -- common/autobuild_common.sh@460 -- $ get_config_params 00:38:03.329 19:44:13 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:03.329 19:44:13 -- common/autotest_common.sh@10 -- $ set +x 00:38:03.329 19:44:13 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:03.329 19:44:13 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:38:03.329 19:44:13 -- pm/common@17 -- $ local monitor 00:38:03.329 19:44:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:03.329 19:44:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:03.329 19:44:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:03.329 
19:44:13 -- pm/common@21 -- $ date +%s 00:38:03.329 19:44:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:03.329 19:44:13 -- pm/common@21 -- $ date +%s 00:38:03.329 19:44:13 -- pm/common@25 -- $ sleep 1 00:38:03.329 19:44:13 -- pm/common@21 -- $ date +%s 00:38:03.329 19:44:13 -- pm/common@21 -- $ date +%s 00:38:03.329 19:44:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721065454 00:38:03.329 19:44:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721065454 00:38:03.329 19:44:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721065454 00:38:03.329 19:44:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721065454 00:38:03.329 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721065454_collect-vmstat.pm.log 00:38:03.329 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721065454_collect-cpu-load.pm.log 00:38:03.329 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721065454_collect-cpu-temp.pm.log 00:38:03.329 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721065454_collect-bmc-pm.bmc.pm.log 00:38:04.267 19:44:15 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:38:04.267 19:44:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:38:04.267 19:44:15 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:04.267 19:44:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:04.267 19:44:15 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:04.267 19:44:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:04.267 19:44:15 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:04.267 19:44:15 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:04.267 19:44:15 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:04.267 19:44:15 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:04.267 19:44:15 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:04.267 19:44:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:04.267 19:44:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:04.267 19:44:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:04.267 19:44:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:04.267 19:44:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:04.267 19:44:15 -- pm/common@44 -- $ pid=1895026 00:38:04.267 19:44:15 -- pm/common@50 -- $ kill -TERM 1895026 00:38:04.267 19:44:15 -- pm/common@42 -- $ for monitor in 
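The four collectors whose logs are redirected above are SPDK's monitoring helpers; roughly, autopackage starts them as below and stop_monitor_resources (which follows) tears them down via the .pid files they leave behind. $OUT stands in for the job's output directory, and backgrounding with & is an assumption about how pm/common launches them:

    prefix=monitor.autopackage.sh.$(date +%s)
    for mon in collect-cpu-load collect-cpu-temp collect-vmstat; do
        scripts/perf/pm/$mon -d "$OUT/power" -l -p "$prefix" &
    done
    sudo -E scripts/perf/pm/collect-bmc-pm -d "$OUT/power" -l -p "$prefix" &
    # later, per monitor:  kill -TERM "$(cat "$OUT/power/collect-cpu-load.pid")"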
"${MONITOR_RESOURCES[@]}" 00:38:04.267 19:44:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:04.267 19:44:15 -- pm/common@44 -- $ pid=1895028 00:38:04.267 19:44:15 -- pm/common@50 -- $ kill -TERM 1895028 00:38:04.267 19:44:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:04.267 19:44:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:04.267 19:44:15 -- pm/common@44 -- $ pid=1895030 00:38:04.267 19:44:15 -- pm/common@50 -- $ kill -TERM 1895030 00:38:04.267 19:44:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:04.267 19:44:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:04.267 19:44:15 -- pm/common@44 -- $ pid=1895053 00:38:04.267 19:44:15 -- pm/common@50 -- $ sudo -E kill -TERM 1895053 00:38:04.267 + [[ -n 1304124 ]] 00:38:04.267 + sudo kill 1304124 00:38:04.276 [Pipeline] } 00:38:04.294 [Pipeline] // stage 00:38:04.299 [Pipeline] } 00:38:04.317 [Pipeline] // timeout 00:38:04.323 [Pipeline] } 00:38:04.342 [Pipeline] // catchError 00:38:04.347 [Pipeline] } 00:38:04.365 [Pipeline] // wrap 00:38:04.370 [Pipeline] } 00:38:04.383 [Pipeline] // catchError 00:38:04.390 [Pipeline] stage 00:38:04.392 [Pipeline] { (Epilogue) 00:38:04.403 [Pipeline] catchError 00:38:04.405 [Pipeline] { 00:38:04.417 [Pipeline] echo 00:38:04.419 Cleanup processes 00:38:04.425 [Pipeline] sh 00:38:04.709 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:04.709 1895144 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:04.709 1895425 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:04.722 [Pipeline] sh 00:38:05.001 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:05.001 ++ grep -v 'sudo pgrep' 00:38:05.001 ++ awk '{print $1}' 00:38:05.001 + sudo kill -9 1895144 00:38:05.013 [Pipeline] sh 00:38:05.296 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:15.287 [Pipeline] sh 00:38:15.573 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:15.573 Artifacts sizes are good 00:38:15.590 [Pipeline] archiveArtifacts 00:38:15.599 Archiving artifacts 00:38:15.824 [Pipeline] sh 00:38:16.107 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:16.123 [Pipeline] cleanWs 00:38:16.134 [WS-CLEANUP] Deleting project workspace... 00:38:16.134 [WS-CLEANUP] Deferred wipeout is used... 00:38:16.140 [WS-CLEANUP] done 00:38:16.143 [Pipeline] } 00:38:16.166 [Pipeline] // catchError 00:38:16.180 [Pipeline] sh 00:38:16.460 + logger -p user.info -t JENKINS-CI 00:38:16.468 [Pipeline] } 00:38:16.484 [Pipeline] // stage 00:38:16.488 [Pipeline] } 00:38:16.506 [Pipeline] // node 00:38:16.511 [Pipeline] End of Pipeline 00:38:16.549 Finished: SUCCESS